
Success Stories from the Age of Legacy Integration

It’s a challenge to bring the mainframe into the modern age of service-oriented architecture, but there’s a huge payoff once the job is done.

When Applications Manager Roger Lanka of FirstMerit Bank and his client/server development team opted for DataDirect’s Shadow z/Services, they were simply looking to speed up response times on their customer site.

Shadow z/Services provides a bi-directional channel between mainframe data and Web services, so developers are able to work from mainframe screens in the languages they’re already familiar with.
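
In practice, that means a generated wrapper hides the green screen behind an ordinary typed call. Here is a minimal Java sketch of the idea; the class, method and field names are invented for illustration and are not actual Shadow z/Services output:

    // Hypothetical wrapper of the kind a screen-to-service tool generates.
    public class BalanceInquiryDemo {

        // Typed result mapped from the fields of a mainframe inquiry screen.
        record Balance(String accountId, long cents) {}

        // Stand-in for the generated proxy: a real one would marshal the
        // request into a CICS transaction and map the screen fields back.
        static Balance lookup(String accountId) {
            return new Balance(accountId, 123_456L); // canned demo value
        }

        public static void main(String[] args) {
            Balance b = lookup("0042-7781");
            System.out.printf("Account %s: $%.2f%n", b.accountId(), b.cents() / 100.0);
        }
    }

Once the wrapper exists, the mainframe call reads like any other Java or .NET method call.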

Not only did the development team at FirstMerit increase uptime and response speed; they can now create a wealth of new functions in a fraction of the time it once took simply to maintain the old ones. “Within two months, we can now accomplish what it used to take us six to eight months to get done,” Lanka marvels.

“If a developer is conversant in .NET, once a user understands the interface, it’s just like they’re programming in .NET,” says Shawn King, a dBASE programmer and analyst at FirstMerit.

His team now exposes several CICS transactions in-house in an environment flexible enough to support a broad range of new apps. The transition was almost effortless: one person got three colleagues up and running with the new interface in only three hours.

Not too surprising: Several top app providers see huge potential in marketing products that expose mainframe data as Web services and are developing a multitude of integration solutions for the big irons that still reside in the business basement. New runtime environments act as middlemen, while other new services are putting a much more accessible front end on legacy functions.

Many industries, especially insurance, healthcare and government, still rely on their mainframes for 75 percent of their data processing. With SOA as the new model for enterprise IT, plenty of businesses are now looking to let loose a significant amount of data stored on mainframes once jammed behind major integration roadblocks.

Technology’s bling-bling
The basic tenet of SOA is easy to comprehend: In the ideal situation, IT concepts are refined into tidy beads of functionality, strung quickly and easily and then reassembled to meet the changing fashions. Involving the mainframe can seem as relevant as wearing a polyester jumpsuit, but if the old outfit can be updated, businesses stand to reap exponential rewards.

Vista Healthplan, which provides insurance services to more than 300,000 members and 100,000 employers in Florida, found itself entering data twice, once into its front-end business app and then again into PowerMHS, its IBM iSeries app, a duplication that directly affected service levels for its members.

“The business logic used to relate the underlying tables resides in PowerMHS’s COBOL generated screens,” explains Jose Contreras, vice president of information technology for Vista Healthplan. “We could access the iSeries based tables directly, but to do so would require us to replicate the business logic already built within the PowerMHS system.”

P-a-r-t-y
Before mainframes were invited to the SOA party, solutions involved moving apps off the mainframe and onto distributed servers to expose them to the Web. That approach is as expensive as it is time-consuming, and by removing this step, enterprises stand to save some serious dough.

Businesses also prize increased agility—essentially defined by speed to market. If businesses can reuse the proven solutions on the mainframe or access them faster, they stand to become leaner, meaner and richer.

Reliability is the big point still in the mainframe’s corner. Developers are well-acquainted with the solid, time-tested services on their mainframes, and reusing them, instead of relying on less reliable distributed platforms, increases stability in an agile, cost-saving environment.

Vista, for example, opted to employ Shadow z/Services to develop an integrated solution for its mainframe data. The product gives Vista direct access to that data in an environment that generates Java and .NET components from the COBOL business logic behind critical apps and exposes them to end users.
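
Conceptually, the generated components map COBOL record layouts onto ordinary classes. A hedged sketch of that mapping, with an invented copybook and field names rather than anything taken from PowerMHS:

    // Illustrative only: how a COBOL record such as
    //   01 MEMBER-REC.
    //      05 MEMBER-ID    PIC X(10).
    //      05 PLAN-CODE    PIC X(4).
    //      05 MONTHLY-PREM PIC 9(7)V99.
    // might surface as a plain Java bean in a generated component.
    public class MemberRec {
        private String memberId;                  // MEMBER-ID, 10 characters
        private String planCode;                  // PLAN-CODE, 4 characters
        private java.math.BigDecimal monthlyPrem; // 9(7)V99: two implied decimals

        public String getMemberId() { return memberId; }
        public void setMemberId(String v) { memberId = v; }
        public String getPlanCode() { return planCode; }
        public void setPlanCode(String v) { planCode = v; }
        public java.math.BigDecimal getMonthlyPrem() { return monthlyPrem; }
        public void setMonthlyPrem(java.math.BigDecimal v) { monthlyPrem = v; }
    }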

Vista says it has improved service levels by leaving the old green-screen app coding behind and is well on its way to full-scale SOA. The medical benefits provider has streamlined the way it accesses its legacy data and has reduced the time needed to handle member inquiries and plan updates.
 
Oh, SOLA mio
Merrill Lynch invested billions of dollars in an environment that processed 80 million CICS transactions daily. Jim Crew, a 14-year veteran, and his development team needed a solution for legacy app integration. “At Merrill Lynch, the majority of the business runs on the mainframe,” Crew says. “It’s difficult to reuse those billion-dollar investments in newer distributed applications. The right way is to do it using Web services.”

Using X4ML, which he developed, Crew and his team made the mainframe part of Merrill Lynch’s SOA. Fast forward a few years, and Merrill Lynch has exposed 420 CICS apps as services. “When we did performance testing, there was a tenfold improvement in performance time and the number of transactions we could process,” Crew says.

SOA Software later took on both X4ML and Crew’s team, and from their expertise created SOLA (Service-Oriented Legacy Architecture).

Meanwhile, Merrill Lynch now processes about 2 million SOLA transactions daily, and estimates SOLA saves it $500,000 to $2 million per application through cost avoidance and direct savings. “We had estimated about $800,000 using traditional technology to build a system,” says John McKinley, Merrill Lynch’s former CTO. “By embracing SOLA, we did the project for $30,000.”

“We didn’t start off on this tack,” Crew, now vice president of SOLA at SOA Software, says. “After the Y2K bust and post-9/11, the economy was in an uncertain state. We were all looking for ways in which we could really take cost out of the infrastructure while at the same time improving reliability and speed.”

Cost savings, then, were the primary factor, and Crew knew that leaving the mainframe in place and reusing the existing assets would pay off. “The right thing to do was to incorporate the mainframe into a service-oriented architecture,” he explains.

“We didn’t want to add another simple endpoint-type software to the hundreds of other pieces of specific software to add to the problem,” Crew recalls. “We needed a holistic solution to our overall software problem.” His goal was to make the mainframe look just like any other endpoint in the system, which would improve operations across the board.

Right people, right place
A full-scale tune-up of internal operations was also the goal for Terry Nafe, a systems architect at a Midwestern insurance company confronted with the task of making data easier to access for employees serving customers and handling policies.

Exposing the mainframe in a SOA was driven by a need to move vital data locked up in the mainframe into the company’s new call center, where it could be used by customer service reps, underwriters and field agents. “We needed to extend functionality to the right people,” Nafe says.

Previous approaches to accessing the mainframe data were neither standardized nor easily governed, and Nafe found using XML over HTTP problematic. The company’s programmers were spending half their time deciphering and parsing XML documents and negotiating an overly complicated interface, time better spent solving the real challenge of passing data between the mainframe and a Java app.
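
The kind of plumbing Nafe describes is easy to picture. A minimal sketch of hand-parsing one mainframe response with the JDK’s own DOM API; the element names are invented:

    import java.io.StringReader;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.xml.sax.InputSource;

    public class HandParsed {
        public static void main(String[] args) throws Exception {
            String xml = "<policy><holder>J. Doe</holder><status>ACTIVE</status></policy>";
            // Every field means another round of factory, builder and node-walking.
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xml)));
            String holder = doc.getElementsByTagName("holder").item(0).getTextContent();
            String status = doc.getElementsByTagName("status").item(0).getTextContent();
            System.out.println(holder + " / " + status);
        }
    }

Multiply that by every transaction and every field, and half a programmer’s day disappears into plumbing.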

“It got extremely technical in a big hurry, and we were wasting time and money,” says Nafe. “We wanted to get out of that infrastructure business.”

Nafe’s team chose GT Software’s Ivory solution, a two-part integrated toolset. The first piece, a server, provides the runtime environment that handles XML message requests and creates the output response in each CICS mainframe region. The server also coordinates complex service operations that Nafe’s team previously coded entirely by hand. The second component is a workstation tool, which the COBOL programmers began using to define, name and set values for individual Web services and to map each service to mainframe assets such as COBOL programs and even screen flows.
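
From the caller’s side, that round trip is just an HTTP POST carrying XML. The sketch below shows the shape of such a call in Java; the endpoint URL and message format are invented for illustration, not Ivory’s actual wire protocol:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class IvoryStyleCall {
        public static void main(String[] args) throws Exception {
            // Hypothetical request for a Web service mapped to a COBOL program.
            String body = "<getPolicy><policyNo>P-10448</policyNo></getPolicy>";
            HttpRequest req = HttpRequest.newBuilder()
                    .uri(URI.create("http://mainframe.example.com:8080/services/getPolicy"))
                    .header("Content-Type", "text/xml")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();
            // The runtime in the CICS region receives the XML, drives the
            // mapped COBOL program and returns the XML response.
            HttpResponse<String> resp = HttpClient.newHttpClient()
                    .send(req, HttpResponse.BodyHandlers.ofString());
            System.out.println(resp.body());
        }
    }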

The objective was to put the tool into the hands of the IT department’s COBOL developers, which Nafe believed was preferable to asking the Web development team to take on yet another business domain. It made good sense: The people who support and maintain the mainframe system are in the best position to determine the optimal business services and select the programs needed to satisfy those goals.

With legacy extended into modernity, Nafe found a whole new world of options, such as the ability to create new extensions quickly and cost-effectively.

All is copacetic in the company’s customer call center, where representatives handle transactions for customers making changes to their policies. “We’re now able to process 50 percent of those transactions straight through by the time the customer hangs up the phone, without anyone else having to touch them,” Nafe says.

The company has also extended its legacy system through its Web interface, where customers can perform update-type transactions online. About 12 percent of those transactions are processed automatically, creating significant savings for the company.

Additionally, legacy integration has made daily life much easier for the development team. A new site was fully tested and production-ready in only three weeks, even though much more time had been reserved for the testing phase.

“It’s a testimony to how you can shorten development time and the project lifecycle if you can reuse legacy components to build a new solution,” Nafe notes. “In the past, we would have built the logic from scratch, which is expensive, and created many more issues in the test phase.”

Many grains of salt
For organizations preparing for legacy integration, there are still challenges. The issue of granularity comes up time and again, because building a service is one thing and sizing it precisely is another thing entirely. A service can be built to satisfy every kind of request and return the maximum amount of information, but why court that time consumption and performance stress when the user’s query needs only four results? At the other end of the spectrum, a service that acts as a fine-toothed comb is tedious and ineffectual when the user is after broad results.

Companies must also examine how their various services are interconnected. No mainframe app is truly independent; apps are usually linked in a chain.

Apps are often shared by disparate groups, so the situation can arise where three groups are building three different sets of code to access one mainframe app. This sort of fragmentation can complicate, rather than simplify, the integration strategy. So, the initial stage of mainframe integration should focus on strategic alignment and on using the resources that already exist before building new ones.

Coulda, shoulda
Security continues to be another major issue: Just because you can expose an app as a service doesn’t mean you should. Mainframes were built for reliability, not for modern validation and authentication. Exposure is a key word here: these services are out there, publishing details of specific business actions that anyone with impure motives may be very eager to get their hands on. So, when integrating, focus strongly on security and encrypt, encrypt, encrypt. One positive is that mainframes have yet to suffer the same ravages from viruses and security hacks that PC and server operating systems have. Diligent, proactive, security-minded users can keep things that way.
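
In transit, that starts with keeping exposed services off plain HTTP. A small, product-agnostic sketch; the endpoint and credentials are invented for illustration:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    public class EncryptedCall {
        public static void main(String[] args) throws Exception {
            String creds = Base64.getEncoder().encodeToString("svc-user:secret".getBytes());
            HttpRequest req = HttpRequest.newBuilder()
                    // HTTPS only: TLS encrypts both the request and the response.
                    .uri(URI.create("https://mainframe.example.com/services/getPolicy"))
                    .header("Authorization", "Basic " + creds)
                    .POST(HttpRequest.BodyPublishers.ofString("<getPolicy/>"))
                    .build();
            HttpResponse<String> resp = HttpClient.newHttpClient()
                    .send(req, HttpResponse.BodyHandlers.ofString());
            System.out.println(resp.statusCode());
        }
    }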

Four benefits of legacy integration
Ron Schmelzer, an analyst at ZapThink, identifies four primary benefits to exposing data stored on mainframes using a Web service standards-based interface:

  • Lower cost: The mainframe can essentially act just like any other app in the architecture, so organizations can build new apps regardless of what’s underneath.
  • Reuse: A service built on the mainframe never has to be rebuilt. With the mainframe fully integrated, its services are usable again and again in any scenario.
  • Agility: A system is most agile and efficient when change doesn’t mean mining through ages of legacy code, which is complex and even risks data loss; endlessly swapping configurations doesn’t make much sense, either.
  • Visibility: Operating blind is just bad for business logistics. With different systems and different languages attempting to form an architecture, planning for change is difficult, and seeing a change’s impact becomes nearly impossible among the convolution. “With compliance requirements in particular, decreased visibility becomes a severe handicap, and mainframe has previously been a big black hole,” Schmelzer adds. “Just look at what happened with Y2K.”

The finer points of granularity
No single app or service is a catch-all for every business need. When creating a service to access data on the mainframe, the question to ask when determining the desired granularity is “Who is using the service?”

For example, a customer service group wants to use an in-house service to access customer info stored on the mainframe when responding to users in a call center. Their customers have specific questions, and the representatives must be able to collect and modify their information quickly. If their request for data yields too large a volume of information, they’re losing time and complicating their service model.

Now, let’s say there are 200 service reps in this call center, and all of them must make the same query repeatedly throughout the day. It then becomes essential that the service return only the information the reps absolutely require, lest they tax system performance, overload the network and slow their responsiveness to customers.

At the same company, a team of underwriters also needs to access legacy data when evaluating policies based on several pieces of information from various sources. If they’re using the same app as the customer service group, each underwriter may end up making repeated queries to obtain the desired data, placing stress on the infrastructure and losing valuable time on the same process for each piece of info they need.
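
In code terms, the remedy is to size operations to their consumers instead of sharing one generic query. A hedged sketch in Java; the interface, methods and record fields are all invented for illustration:

    // Illustrative granularity choices for the two groups described above.
    public interface PolicyServices {

        // Call-center reps: one lean, fast lookup returning only what a
        // phone conversation needs.
        CallerSummary summarize(String policyNo);

        // Underwriters: one coarse-grained call that gathers everything an
        // evaluation needs, instead of many repeated fine-grained queries.
        UnderwritingFile assemble(String policyNo);
    }

    record CallerSummary(String holder, String status, String lastPayment) {}

    record UnderwritingFile(CallerSummary summary, String claimsHistory,
                            String riskFactors, String priorCarriers) {}

The reps’ lookup stays light, and the underwriters’ many round trips collapse into one.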

This is a simplified example, but it’s easy to understand why poor planning for granularity requirements could turn a legacy integration strategy from a miracle cure to a crashing disaster.

Even with his successful experience with mainframe and SOA integration, Terry Nafe, a systems architect for an insurance company, admits achieving the desired granularity for each area of the business is a work in progress. He recommends that companies creating a new service architecture consider precisely how these services will be used and by whom.

“It’s a little bit of a learning experience, and a little more of an art than a science, and it takes time to get to the right level of granularity,” he says.