Back Up Data Correctly to Avoid a Disaster, Even When Disaster Strikes

As the tech landscape evolves, data storage practices need to be carefully considered and amended to meet changing requirements

The convergence of growing volumes of data stored on company networks and stricter compliance regulations dictating how long that information must be retained has helped cloud storage explode in popularity. The cloud offers an easy-to-use, scalable and cost-effective solution for data storage. However, organizations must seriously consider how they manage their storage from a back-up and disaster recovery perspective. While there is no doubt that cloud computing can speed disaster recovery - from reducing the time it takes to restore data, to the fact that information is stored off-site, mitigating the risk of natural disasters - incorrectly managed storage can often prove more of a hindrance than a help. Whether an employee accidentally deletes a file or a more sinister hack on the company network takes place, for most organizations it is inevitable that data recovery will be needed at some point. Planning for disaster is essential, and having an effective back-up and disaster recovery process in place can save headaches down the line.

As the tech landscape evolves and organizations increasingly have to adapt to new trends, such as virtualization and unstructured Big Data, data storage practices need to be carefully considered and amended to meet changing requirements. The multitude of options available can leave IT teams struggling to identify the best solution for their organizational needs. Companies often fail to consider future scenarios when making decisions and instead focus only on their current needs. This has the potential to cause problems down the line, particularly when it comes to back-up and disaster recovery strategies.

From hardware failure to network hacks, the potential for data loss is huge. A recent survey by independent research firm TechValidate* revealed that significant hardware failures occur far more frequently than many may believe. According to the survey, 52 percent of respondents had experienced a failure within the last year, and of that number, 37 percent had suffered the loss within the last six months. However, the same study also revealed that 81 percent of organizations do not have a tried and tested back-up and disaster recovery strategy in place. What is alarming about these statistics is that disaster recovery will be an inevitable requirement at some point for almost every business, yet most have not prepared for the eventuality.

If, as the survey suggests, roughly four out of five U.S. companies have not tested their disaster recovery strategies, chances are they have no idea how long it will take to restore their business-critical data if disaster were to strike. Where data is stored will make all the difference. While storing data all in one place may once have been the norm, this need not be the case with a cloud solution. In fact, storing everything in one environment can contradict a number of the cloud's value propositions, leading to adverse financial and disaster recovery effects. Cloud storage is a relatively cheap commodity, but storing everything - from emails about company social events to key customer information - in one place can rapidly become expensive, even in the cloud. From a practical point of view, it is also likely that much of the information stored within a company will never be looked at again, and while compliance initiatives dictate that data must be retained for a certain period of time, the location is up to the organization. There is therefore no reason to store everyday essential information in the same location as the 'never-again' information.

Further, if an outage occurs, any company will need to get its business-critical information back as close to immediately as possible. But if every piece of company information recorded over the last 10 years is being recovered at once, the process will be hindered and take far longer than necessary, or feasible, for business operations. This will not only cause serious headaches for anyone who needs access to the data, but could also cost millions in lost revenue. Imagine a retail outlet unable to process payments because its server has gone down and cannot be brought back quickly enough, owing to all the less essential information being restored at the same time. The lost revenue could be extremely detrimental.

Storing by Importance
A new approach is needed to determine where data should be stored. A key element of any disaster recovery plan is the idea of "tiering" the data to be recovered based on its overall business importance. This allows resources to be proportioned correctly against budget requirements and business impact.

The first step should be deciding which applications and data are business critical and which are not. This allows the data to be grouped by importance and a 'storage hierarchy' to be put in place. Data that does not need to be accessed frequently can be placed in lower-cost storage that may take days to recover, while business-critical information should be placed in more expensive storage where it can be recovered quickly. In the event that a system restore is necessary, irrelevant information will not slow the process down and everything can be returned at a speed appropriate to its importance.
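As a concrete illustration, such a hierarchy can be expressed as a small set of tiers, each pairing a relative storage cost with a target recovery window. The tier names, costs and recovery times below are hypothetical examples, not figures from any particular product:

```python
# Hypothetical storage tiers for a disaster recovery plan.
# Each tier pairs a relative storage cost with a target recovery
# time (RTO, in hours) matched to the data's business importance.
STORAGE_TIERS = {
    "critical": {"relative_cost": 3.0, "rto_hours": 1},    # e.g. payment systems
    "standard": {"relative_cost": 1.5, "rto_hours": 24},   # day-to-day files
    "archive":  {"relative_cost": 1.0, "rto_hours": 72},   # compliance-only data
}

def restore_order(tiers=STORAGE_TIERS):
    """Return tier names in restore order: shortest recovery window
    (most business-critical) first, so essential data comes back first."""
    return sorted(tiers, key=lambda name: tiers[name]["rto_hours"])
```

Calling `restore_order()` here yields `["critical", "standard", "archive"]`, capturing the point that business-critical data is restored first rather than queued behind everything else.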

Most companies have vast amounts of data, and manually deciding what is stored where would be a laborious process for an individual, or even a team, after the initial segregation has taken place. Therefore, once the hierarchy has been put in place, it can be combined with an automated system that intelligently tracks and tags all data based on predefined rules, and automatically diverts it to the correct location. Not only does this allow IT teams to focus on more value-adding tasks, it also guarantees all data is backed up, without concern that anything may have been missed.
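A minimal sketch of such a rule-based system follows, assuming made-up rule keywords and tier names and a simple substring match; a real product would use far richer metadata and policies:

```python
# Sketch of rule-based data tiering: predefined rules tag each item,
# and the tag decides which storage tier it is routed to.
# Keywords and tier names are illustrative assumptions only.
TIERING_RULES = [
    ("customer", "critical"),   # key customer records -> fast, costly storage
    ("invoice",  "critical"),
    ("social",   "archive"),    # e.g. emails about company social events
]
DEFAULT_TIER = "standard"

def assign_tier(item_name, rules=TIERING_RULES):
    """Tag an item with the tier of the first matching rule."""
    name = item_name.lower()
    for keyword, tier in rules:
        if keyword in name:
            return tier
    return DEFAULT_TIER

def route_backup(items):
    """Group item names by assigned tier, ready for back-up routing."""
    routed = {}
    for item in items:
        routed.setdefault(assign_tier(item), []).append(item)
    return routed
```

For example, `route_backup(["customer_db.sql", "social_event.eml", "readme.txt"])` routes the customer database to the critical tier, the social email to the archive tier, and the unmatched file to the standard tier.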

With these systems in place, businesses can test and tweak their strategies and be sure that, in the event of an outage, their applications, data and systems are only the touch of a button away. Planning, implementing and testing data recovery techniques helps ensure the disaster itself is the only disaster.

*Survey conducted by independent research firm TechValidate, December 2012.

More Stories By Bob Davis

With more than 25 years of software marketing and executive management experience, Bob Davis oversees Kaseya's global marketing efforts. He applies significant experience from marketing network and system management solutions to directing Kaseya's strategy, product marketing, branding, public relations, design and social networking functions. One of the original founders of the company, Davis returned to Kaseya in 2010.


