CIOs and IT managers in Kuwait are still tackling huge volumes of data with traditional storage strategies and technologies, but in truth a revolution in data management needs to take place in order to minimise the expensive heavy lifting required to protect and manage large volumes of data
Kuwait City, Kuwait, 24th May, 2013: There is no doubt that data management has gone through an evolutionary process, starting with a focus on straightforward backup and recovery in the 1990s and then moving to integrated protection solutions in the 2000s. However, the impact of Big Data and the move towards a more mobile way of working now demand a radically new approach.
Whether it is stored in one place, distributed across the enterprise or located in different geographies, data growth is making its presence felt in companies of all sizes in Kuwait. It is unfortunately becoming all too common for yesterday's backups to run into tomorrow, and too many companies are reporting that they risk missing Service Level Agreements (SLAs) if processes can't keep up. In itself this is perhaps an indication that existing storage strategies are no longer capable of tackling huge volumes of data or, just as importantly, of supporting the business challenges of today.
Allen Mitchell, Senior Technical Account Manager, MENA at CommVault Systems says that a revolution in data management in Kuwait is now called for in order to minimise the investments needed to protect and manage large volumes of data both inside and outside of the data centre. This means thinking differently about each of the traditional data management processes and technologies, and increasing the use of automation and enterprise-wide policies to introduce efficiencies and cost savings whilst reducing management headaches. Instead of creating archive, backup and storage silos merely as a business insurance policy, each element should be valued equally and then converged to streamline the process, making data the real asset it should be to an organisation.
Until now, making data work for an organisation has typically involved trying to mine large volumes of existing data across multiple storage types and locations to gain new insights, perhaps for research projects or compliance challenges, but this puts a massive strain on operational staff who are already overstretched. Adopting the cloud and virtualisation is widely accepted as one of the best ways to increase efficiency, but it also adds another layer of complexity to data management which must be addressed. The Bring Your Own Device (BYOD) movement again offers potential savings, but it also introduces new risks to data security at the edge, as well as the challenge of getting information onto these devices efficiently.
It should be no surprise then that, according to Gartner (2012), only 26% of CIOs think they have effective tools and skill sets in place to manage these issues and to make data an asset to the organisation. What is needed is a scalable, modern information management architecture that is fit for purpose, helping businesses to protect and manage more information more efficiently. It should also be able to meet the demands of a virtual world typified by a dynamic pool of computing and storage.
At the heart of this modern information architecture should lie the ability to collect data just once to reduce the overall storage and infrastructure burden, and then enable access to it from anywhere in order to re-purpose it. Snapshot, backup and archive data operations already take place, so why not combine those tasks into a collection process that sends deduplicated data to a single 'content store'? This drives a large number of efficiencies; the content store can then be used for recovery or repurposing of the data. Generating metadata in the form of a content index at the same time is a key process that enables that access downstream. Combined with good access and security controls, this repository can then be presented to the whole workforce, potentially from any device. When set against the traditional multiple-silo approach, where backup stores just sit there offering little value except insurance and use invasive technologies to acquire the data in the first place, one could wonder why a streamlined collection to a combined multi-purpose store isn't the norm. Having a searchable content store may also negate the need for a dedicated and expensive 'Big Data' repository if you can search and recover just the research data needed to perform your analysis.
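The idea of a single deduplicated content store with a content index can be illustrated with a minimal sketch. This is a hypothetical toy model, not any vendor's actual implementation: data is split into blocks, each block is stored only once under its hash, and a simple index maps each collected item back to its blocks so it can be restored or repurposed later.

```python
import hashlib

class ContentStore:
    """Toy single 'content store': deduplicated blocks plus a simple
    content index. Names and structure here are illustrative only."""

    def __init__(self):
        self.blocks = {}   # block hash -> raw bytes (each stored once)
        self.index = {}    # item name -> ordered list of block hashes

    def ingest(self, name, data, block_size=4):
        """Collect an item once: split into blocks, keep only new blocks."""
        hashes = []
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            h = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(h, block)  # dedupe: skip known blocks
            hashes.append(h)
        self.index[name] = hashes

    def restore(self, name):
        """Reassemble an item for recovery or repurposing."""
        return b"".join(self.blocks[h] for h in self.index[name])

store = ContentStore()
store.ingest("backup", b"ABCDABCD")   # two identical 4-byte blocks
store.ingest("archive", b"ABCDEFGH")  # first block is already stored
# Four blocks were ingested, but only two unique blocks are kept,
# and both items can still be restored in full.
```

The point of the sketch is the convergence the text describes: backup and archive feed the same store, so identical blocks are kept once, and the index makes every collected item retrievable for purposes beyond insurance.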
Controlling the amount of data on live systems by moving to automated policy-based retention with search capabilities will then certainly increase the ability of companies to find the right piece of information when it's needed, practically on demand. Using a combined data management and search engine also means that a single search can cover many different repositories, allowing the organisation to leverage different types of storage media: from dedupe disk through to removable media or cloud storage.
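The two mechanisms above, policy-based retention and a single search spanning every repository, can be sketched in a few lines. The catalogue entries, tier names and field names below are hypothetical examples, not a real product's schema.

```python
from datetime import date, timedelta

# Hypothetical catalogue: each record notes which storage tier holds it.
catalogue = [
    {"name": "q1-report.doc", "tier": "dedupe-disk",
     "modified": date(2013, 3, 1), "tags": {"finance"}},
    {"name": "old-logs.tar", "tier": "tape",
     "modified": date(2010, 6, 1), "tags": {"ops"}},
    {"name": "survey.csv", "tier": "cloud",
     "modified": date(2012, 11, 5), "tags": {"research"}},
]

def apply_retention(records, keep_days, today):
    """Automated policy-based retention: keep only records newer than
    the policy window; everything else falls out of live systems."""
    cutoff = today - timedelta(days=keep_days)
    return [r for r in records if r["modified"] >= cutoff]

def search(records, tag):
    """One search covers every repository, regardless of media tier."""
    return [r["name"] for r in records if tag in r["tags"]]

kept = apply_retention(catalogue, keep_days=365, today=date(2013, 5, 24))
hits = search(catalogue, "research")  # finds data wherever it lives
```

A one-year policy keeps the 2013 report and the 2012 survey while letting the 2010 logs age out, and a single tag search reaches the cloud tier just as easily as dedupe disk or tape.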
The ultimate goal for any business has to be about changing the way its global and connected workforce discovers, accesses and analyses information in the future, ideally without requiring IT intervention. A single, central, virtual repository that keeps only one copy of all data, secure, de-duplicated, tiered to optimise costs and application-aware, will be critical to achieving instant ways to search and make sense of existing data to enable faster decision making. It will obviously reduce storage needs overall, but more importantly it will transform the administrative process into a more dynamic capture of business-critical information that can be 're-used and recycled'. In a world where regulatory requirements, compliance and the need to demonstrate a strong return on investment (ROI) are constantly under scrutiny in the boardroom, a content resource such as this will dramatically improve the ability to search for, and access, data with ease from any device, anywhere.
This access challenge really affects BYOD strategies; while they may offer the promise of reduced cost and happy employees, they also present the risk of corporate or sensitive data becoming detached from the enterprise with no ability to control access or expire the information. Getting the right mix of the individual's own work-related data and the desired corporate information, not just email, to tablets and smartphones can be tricky. If the organisation doesn't provide effective enablement tools, users are left with public cloud offerings that at best are awkward to manage and at worst risk a significant data breach.
The ability to get files to any device from a secure corporate content store has to be matched by transparent and automated collection of data created at the edge as well as from primary systems. The collection process from users' laptops, by default, also provides them with a backup, which for the most part can be very low impact when leveraging the latest dedupe and other technologies. This vehicle not only frees users from the pain of managing external cloud solutions to get their data onto their smart device, but it also brings the data they create for their job into the organisation's control. Not only are workers more productive, but it provides a platform for good corporate governance and compliance that fully extends to edge data.
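Why edge collection can be "very low impact" comes down to source-side deduplication. The sketch below is a hypothetical protocol, not CommVault's actual one: the laptop hashes its blocks, compares them against what the server already holds, and sends only the blocks that are new, so a daily collection where little has changed moves very little data.

```python
import hashlib

def collect_from_edge(server_blocks, laptop_data, block_size=4):
    """Source-side dedupe sketch: hash the laptop's blocks, then send
    only those the server does not already hold. Returns the number of
    blocks actually transferred (the 'impact' on the user's connection)."""
    blocks = {}
    for i in range(0, len(laptop_data), block_size):
        block = laptop_data[i:i + block_size]
        blocks[hashlib.sha256(block).hexdigest()] = block
    # Only unknown blocks travel over the wire.
    missing = {h: b for h, b in blocks.items() if h not in server_blocks}
    server_blocks.update(missing)
    return len(missing)

server = {}
sent_day1 = collect_from_edge(server, b"DOC1DOC2")  # everything is new
sent_day2 = collect_from_edge(server, b"DOC1DOC3")  # only DOC3 travels
```

On day one both blocks are transferred; on day two only the changed block moves, while the user still ends up with a complete, restorable copy under the organisation's control.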
Putting centralised content and sophisticated indexing in the data management driving seat, instead of focusing on the individual components that play a role in the data management process overall, is sure to be the most effective way to help companies of all sizes take control of Big Data going forward. Content-based retention policies that keep only the data that really matters to a business are set to change the way we view and utilise data, re-purposing data from the archive and bringing it back to life instead of consigning it to history.