In-Depth

Sometimes It’s Not a Storage Issue

Trying to reconcile the technologies and products offered by the storage industry with the actual problems confronting IT can seem like a thankless job.

The last several weeks have been hectic ones, punctuated by trips to both the East and West coasts. Two weeks ago, I found myself in the offices of a large financial firm in Southern California; a week later, I was talking to an IT person from the United Nations in New York City. Both had concerns about the adequacy of their data protection schemes, but neither had what could properly be characterized as a storage-related problem.

The Southern California meeting featured several top people from the financial firm’s IT organization. I was there to help a storage analyst (part of Sun Microsystems’ business continuity consulting practice) report his findings on the client’s data protection status. Basically, it was my job to validate the consultant’s study and to dispel any doubts about the “objective purity” of its research and recommendations, despite the consultant’s vendor affiliation and the fact that he was proposing an overhaul of the company’s backup solution to the tune of about $1.5 million.

After reviewing the Sun consultant’s work, I reported my thoughts. He had done a thorough job of assessing the deficits in the client’s backup architecture, and his proposed fix was both measured and well suited to the client’s stated requirements. Moreover, some of his findings demonstrated that the client might be well served by deploying an archiving strategy to slow data-growth-driven capital expenditures on new arrays. I agreed that, with effective data management, the bulk of the client’s data protection needs well into the future would be addressed by the proposed investment in tape technology.

One opportunity for archiving that screamed out at me, I noted, was a substantial Sybase database that had grown very large. Any time a database has been in place, operating, and un-archived for nearly 10 years, it sends up a red flag for me. Too many times, bloated databases become repositories for seldom-accessed data: data that has gone to sleep or gone stale, and that might be an effective candidate for offloading to an archive.

Were they aware of the options for archiving databases, from OuterBay (now an HP product) to Princeton Softech to perhaps the granddaddy of them all, Grid-Tools? Any one of these tools, I suggested, would enable the company to extract older data, with all of its relationships intact, for inclusion in a data mart that could then reside either on cheaper disk more appropriate to its limited use, or on tape. Such a strategy, I noted, would buy back a considerable amount of space on their Hitachi Data Systems storage.
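To make the idea concrete, the pattern these tools automate is essentially extract-then-purge. Here is a minimal sketch in Python against a throwaway SQLite store; the table and column names are my own invented examples, and a real archiving product would also carry along related child-table rows so that referential integrity survives the move:

    import sqlite3
    from datetime import datetime, timedelta

    # Hypothetical cutoff: anything older than seven years is archive fodder.
    CUTOFF = (datetime.now() - timedelta(days=365 * 7)).strftime("%Y-%m-%d")
    DDL = ("CREATE TABLE IF NOT EXISTS transactions "
           "(id INTEGER PRIMARY KEY, trade_date TEXT, detail TEXT)")

    src = sqlite3.connect("production.db")  # stand-in for the production database
    dst = sqlite3.connect("archive.db")     # cheaper-tier archive or data mart
    src.execute(DDL)
    dst.execute(DDL)

    # Copy the stale rows into the archive...
    rows = src.execute("SELECT id, trade_date, detail FROM transactions "
                       "WHERE trade_date < ?", (CUTOFF,)).fetchall()
    dst.executemany("INSERT OR REPLACE INTO transactions VALUES (?, ?, ?)", rows)
    dst.commit()

    # ...then purge them from production to reclaim array capacity.
    src.execute("DELETE FROM transactions WHERE trade_date < ?", (CUTOFF,))
    src.commit()
    print(f"archived {len(rows)} rows older than {CUTOFF}")

The point of the commercial tools is everything this sketch glosses over: discovering the web of foreign-key relationships, moving dependent rows in concert, and leaving behind a way to query the archived data.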

They bristled at the idea, for reasons I at first mistook as brand loyalty. Sybase, they noted, already offered some archiving functionality as part of its database toolkit. For now, they were simply trying to cope with burgeoning data through an effective combination of tape, disk-to-disk, and perhaps virtual tape or continuous data protection, or both. They readily admitted that disaster recovery via a combination of tape backup, mirroring, and continuous data protection was on the minds of their company’s business managers. Monies had been allocated to improve these processes. They were also looking for guidance on improving their bit-level replication capabilities, possibly to include some sort of de-duplication/compression technology and possibly some encryption solutions.

What did I think of Data Domain? What about NeoScale and Decru for encryption? I gave them my views on appliances and my concerns about the inherent scaling issues of “tin-wrapped” software solutions, not to mention their power-consumption and BTU-generation problems. I suggested they look into software options for de-duplication, less expensive and more generic storage options such as Zetera-enabled Bell Micro Hammer arrays, and purpose-built hardware encryption such as DISUK’s Paranoia products. I also offered my view that Caringo’s software-based content-addressable storage might provide an effective overlay for long-term retention and fast recall of financial records and files.
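For readers wondering what software-based de-duplication actually does under the covers, the core idea reduces to fingerprinting chunks of data and storing each unique chunk only once. A minimal sketch in Python follows; the fixed 4 KB chunk size and SHA-256 fingerprints are illustrative assumptions, since commercial products typically use variable-size, content-defined chunking:

    import hashlib
    import sys

    CHUNK = 4096   # illustrative fixed chunk size
    store = {}     # SHA-256 fingerprint -> unique chunk bytes
    recipe = []    # ordered fingerprints needed to rebuild the file
    raw = 0        # total bytes read

    with open(sys.argv[1], "rb") as f:
        while block := f.read(CHUNK):
            raw += len(block)
            digest = hashlib.sha256(block).hexdigest()
            store.setdefault(digest, block)   # keep only the first copy seen
            recipe.append(digest)

    unique = sum(len(b) for b in store.values())
    print(f"{len(recipe)} chunks, {len(store)} unique; "
          f"{raw / max(unique, 1):.1f}:1 reduction")

Run it against a backup image full of repeated files and the reduction ratio climbs; run it against already-compressed data and it hovers near 1:1, which is why dedup vendors are careful about the workloads they quote.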

Finding a Strategy

When I asked again about archiving strategies, long shadows descended on their faces. When we were later joined by the firm's CIO, I understood the reason. He was a straightforward “if it ain’t broke, don’t fix it” person, very pragmatic in his reasoning. He seemed to make my case for archiving when he noted that the possibility of his Sybase database failing or becoming corrupted was the disaster recovery scenario that “really kept him awake at night.”

He hadn’t yet seen an effective CDP solution for Sybase. They were using some of the vendor’s own snapshot capabilities, but he doubted that the many applications using the database could be recovered in any efficient manner if the database crashed.

“Some of our users want 20 years of data online all the time, so they can reconcile historical transactions if needed,” he said, sounding somewhat exasperated. This was the first impediment to any strategy that aimed to archive, then delete, older data to free up space.

Second, he doubted that any coherent archive strategy could be made to work with the system, which was showing major signs of age as well as data bloat. Parts of the system were actually coded in COBOL and FORTRAN, he noted, reflecting its long history and its many architectural masters over the years. Archiving this beast effectively might well require a complete rewrite of the database and the applications themselves, something he would be hard-pressed to find the money for.

Was the database mission-critical? Absolutely, he said. It generated nearly $200 million in revenues for the company last year alone.

Had he tried to recover the database at an alternate site to demonstrate the (in)efficacy of current data-protection and DR strategies? Only in parts, he said. He added that he seriously doubted that recovery would be possible in a reasonable timeframe.

So, I summarized, you have a mission-critical application that generates significant funds for the company and that you doubt is recoverable in an emergency. Had this been clearly stated to the Front Office? It had. They had simply decided not to incur the huge expense to rewrite the application at this time, even though it might be a disaster waiting to happen.

If the application were redesigned, I asked, might it be possible to use it to generate additional revenues for the company? Perhaps by enabling the firm to buy out portfolios of accounts from other sources? He reported that while such opportunities might be enabled, they still did not seem to merit the investment that would be required to rewrite the application.

We found ourselves at an impasse, with the CIO seeming as perplexed as I was about how to proceed. He was hoping against hope that a “bolt-on” solution would miraculously appear in the market to protect the database and ensure its recoverability, especially since there was zero interest among management in attacking the core problem and fixing the actual source of risk: poor application design.

A Second Tale

A week later in New York, an IT person posed a similar conundrum. During a break at a seminar I was hosting on behalf of CA, the man told me his tale.

He was deeply concerned about the efficacy of, not to mention the waste inherent in, the data protection strategy he administered at the United Nations. He was taking routine backups, but often he found himself backing up backups that the various delegations to the international body had already made to disk.

He wanted to consolidate and rationalize the jobs so that he would not be wasting time and resources. Could the delegates identify the files, folders, or other objects they wanted backed up to tape, so he could administer and optimize the overall process? Without such procedural changes, his backup windows were inadequate to the task of moving so much data, he said. Restores might be impossible, he added; he had never successfully completed a full restore.
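The rationalization he had in mind is, at bottom, an overlap analysis: find the jobs whose sources are already covered by another job. A minimal sketch, using a hypothetical list of jobs and source paths of my own invention:

    from pathlib import PurePosixPath

    # Hypothetical backup jobs mapped to their source directories.
    jobs = {
        "delegation-a":       "/data/delegations/a",
        "delegation-a-redux": "/data/delegations/a/backups",  # a backup of a backup
        "delegation-b":       "/data/delegations/b",
        "all-delegations":    "/data/delegations",            # supersedes a and b
    }

    def covers(parent: str, child: str) -> bool:
        """True if child is the same path as, or nested under, parent."""
        p, c = PurePosixPath(parent), PurePosixPath(child)
        return p == c or p in c.parents

    for name, path in jobs.items():
        shadows = [n for n, p in jobs.items() if n != name and covers(p, path)]
        if shadows:
            print(f"job {name!r} ({path}) is already covered by {shadows}")

Even this toy version would have flagged his backups-of-backups; the hard part, as he discovered, was not the analysis but getting the delegations to agree to act on it.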

I asked whether he had suggested such a sensible strategy to his users. He bristled, noting that some had been downright rude in their rejection of the strategy, informing him that he was only at the U.N. to serve them, not to tell them what to do or how to do it.

The man seemed genuinely perplexed, so I offered what little advice I could. First, he must at least try to do a full restore. Such an effort was needed to identify the scope of his problem and the operational impediments to successful, expedient recovery. At best, he would develop a baseline dataset describing the status quo; at worst, the exercise would demonstrate just where the impediments to efficient recovery lay. He should document everything.

Then he needed to define his vision of a data protection and recovery strategy that would remove those impediments and hurdles.

Merging the two would provide a “one-two” punch—a concise description of the problem and a suggestion for improvement—that he could then articulate to the appropriate person in the chain of command. Do it in writing, I urged him, because when a data disaster occurs, he might find it politically useful to have the situation documented. The existence of such a document might help deflect the wrath of the end users from settling on his shoulders. (I also told him that he might do well to update his resume in either case.)

The Common Thread

In both of these situations, I found myself struggling to reconcile the technologies and products offered by the storage industry with the actual problems confronting the IT contingent. Poor application design, recalcitrant management, and uncooperative end users are factors that cannot be addressed effectively in data protection planning through the application of a new-and-improved data replication, high-availability, or tape backup solution, regardless of the brand name on the box.

At the end of both meetings, I couldn’t help but question the ultimate value of my advice. Their root problems were left largely unaddressed. My only consolation was that they might make some interim improvements by using the technologies we discussed, and that I hadn’t hyped the recommended wares as panaceas for their data protection requirements.

Your insights are welcomed. [email protected].