
SOA’s Impact on SLAs: Trouble Ahead

Organizations may see service-enablement, and the next generation of SLAs, as a chance to improve the responsiveness and dynamism of their IT departments.

Before getting carried away with service-enabled euphoria, it’s worth taking stock of the existing IT landscape. Or, to put it another way, isn’t there a sense in which the service-enabled infrastructure of the future can and must resemble the tightly-coupled plumbing of today? It’ll be radically different, to be sure—but won’t there also be a continuum, of sorts, in certain important respects?

For example, will the next generation of service level agreements (SLAs) resemble the very definite, clearly demarcated accords of today? After all, current SLAs usually take the form of agreements—let’s call them contracts—between IT and internal line-of-business customers (finance, marketing, etc.), external customers, or third-party providers (for example, outsourcing services providers). But the SLAs of tomorrow will almost certainly involve the same constituencies—and many more.

These considerations raise a host of questions. For example, how does an SLA adequately encompass an application or service with multiple internal and external touchpoints? Will service-enablement force IT to develop more granular, concise, or responsive SLAs? How must customers re-imagine or otherwise rethink their SLAs in the age of SOA? More to the point, what kinds of accommodations or allowances must companies make for the immaturity of service-enabled infrastructures, and what kinds of new issues should they expect to encounter?

There are many more questions than answers, says Vince Re, chief architect with Computer Associates International Inc. (CA).

Re stresses that organizations should see service-enablement, and the next generation of SLAs, as a chance to improve the responsiveness and dynamism of their IT departments.

“Today, most IT organizations are kind of reactive. They run whatever configuration they think is best, and they monitor and report on whether they met their SLAs or not,” Re observes. “So it’s a real paradigm shift to doing that in a proactive way rather than a reactive way, knowing what the components are, knowing what the pieces are, and then dynamically mapping different resources into it. If you don’t meet your SLAs in that case, it’s not that you have something misconfigured, it’s that it’s probably time to expand your infrastructure.”
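
To make Re’s reactive-versus-proactive distinction concrete, here is a minimal sketch in Java. Everything in it is hypothetical: the MetricsFeed and ResourcePool interfaces stand in for whatever monitoring and provisioning plumbing an organization actually runs. The point is the posture, not the particulars. A reactive monitor can only report a violation after the fact; a proactive one maps in more capacity as measurements trend toward the SLA limit.

    // Hypothetical monitoring and provisioning hooks; real systems would
    // supply their own implementations of these.
    interface MetricsFeed {
        double averageResponseTimeMs(String service);
        void reportViolation(String service, double observedMs);
    }

    interface ResourcePool {
        void addCapacity(String service, int units);
    }

    public class ProactiveSlaMonitor {
        private static final double SLA_MAX_RESPONSE_MS = 2000.0;
        private static final double HEADROOM = 0.8; // act at 80 percent of the limit

        private final MetricsFeed metrics;
        private final ResourcePool pool;

        public ProactiveSlaMonitor(MetricsFeed metrics, ResourcePool pool) {
            this.metrics = metrics;
            this.pool = pool;
        }

        public void check(String service) {
            double responseMs = metrics.averageResponseTimeMs(service);
            if (responseMs > SLA_MAX_RESPONSE_MS) {
                // Reactive posture: the SLA is already missed; all that
                // remains is reporting the violation.
                metrics.reportViolation(service, responseMs);
            } else if (responseMs > SLA_MAX_RESPONSE_MS * HEADROOM) {
                // Proactive posture: the trend is toward a violation, so map
                // additional resources in before the SLA is actually missed.
                pool.addCapacity(service, 1);
            }
        }
    }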

The transition from a fundamentally reactive to an essentially proactive infrastructure won’t happen overnight—not by a long shot.

For one thing, none of the major players can even agree on the relevant Web services management standards: CA, Hewlett-Packard Co. (HP), IBM Corp., and Oracle Corp., among others, back the Web Services Distributed Management (WSDM) standard, which they’ve submitted to OASIS. Intel Corp., Microsoft Corp., Sun Microsystems Inc., and a host of other vendors, on the other hand, support a rival standard, called Web Services Management (WS-Management). Both of these proposed standards are still gestating; if they’re anything like the open (or quasi-open) standards that preceded them, they’ll likely gestate for some time to come, too. The upshot, for bleeding-edge adopters, at least, is a slightly more complex approach to service-enablement.

“There’s one camp that believes that at some point you’ll see one clearly defined set of standards. There’s another camp that believes that in the real world you’ll always have multiple competing things, and the right approach is to have some kind of abstraction and kind of virtualize away from any standard and accommodate them in parallel,” explains CA’s Re. “Someone wanting to jump in today is going to want to wind up on the [former] side of the line.”
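
Re’s second camp, which abstracts away from any single standard, amounts to a familiar adapter pattern. The sketch below is illustrative only: the interface and class names are invented, and real adapters would delegate to actual WSDM or WS-Management toolkits. What matters is that application code binds to the neutral interface, so both standards can be accommodated in parallel.

    // Neutral management interface that application code binds to.
    interface ServiceManager {
        String status(String serviceUri);
    }

    class WsdmManager implements ServiceManager {
        public String status(String serviceUri) {
            // A real adapter would issue a WSDM GetResourceProperty request here.
            return "WSDM status for " + serviceUri;
        }
    }

    class WsManagementManager implements ServiceManager {
        public String status(String serviceUri) {
            // A real adapter would issue a WS-Management (WS-Transfer) Get here.
            return "WS-Management status for " + serviceUri;
        }
    }

    public class ManagerFactory {
        // Choose an adapter per endpoint, so both standards coexist in
        // parallel behind the same neutral interface.
        public static ServiceManager forStandard(String standard) {
            return "WSDM".equals(standard) ? new WsdmManager() : new WsManagementManager();
        }
    }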

Bugaboos Remain the Same

In the meantime, some of today’s most vexing SLA bugaboos—e.g., buck-passing from one provider to another (in the case of SLAs with multiple organizational touchpoints), rampant petty fiefdoms in which each organizational unit struggles to maintain its autonomy, and so on—could well be exacerbated in the SOA environments of the future.

Take buck-passing. Organizations want the proverbial one throat to choke when things go wrong. It’s difficult enough to throttle a single gullet today, of course—but in the composite SOA-scape of the future, when “virtual” applications are assembled from a profusion of existing services (some internal, some external), it could be impossible. What will customers do when there’s a problem somewhere in the depths (or extremities) of their service-oriented plumbing—but each of their providers is playing musical pass-the-buck? Is it a problem for the line-of-business—which (if history is any indication) will successfully have resisted IT’s attempts to control what kinds of and how many services it consumes—or is it more properly the province of IT? Who decides, and why?

There aren’t any definite answers, experts say. But to a limited extent, organizations are already grappling with these and other issues.

Mark Eshelby, quality solutions product manager for testing tools specialist Compuware Corp., points to the emergence of a new quasi-executive position charged with coordinating, managing, and enforcing software quality across an organization—the quality manager. A quality manager, he says, helps get to the bottom of the buck-passing and attempts to mitigate the turf-warring that invariably occurs whenever software quality issues are identified. His or her oversight includes not just the line-of-business and core IT, but also the data management group, network services group, and external providers, among others.

It’s exactly the kind of executive-sanctioned role that—with important modifications—may help bring order and harmony to SOA. “We’ve typically had resources in an organization that are responsible for running tests or building test scripts. And the quality manager is the person responsible for orchestrating that team’s work efforts. I would say it’s actually becoming very popular,” said Eshelby, in an interview last year.

Turf-warring could be the most intractable problem—in part because individual and organizational egos, feelings, and perceptions of power and value are at stake. But there’s also turf-warring of a different sort—that which occurs between and among software vendors. Just because service-enablement promises plug-and-play bliss doesn’t necessarily mean that ISVs or other service providers have to play ball.

Consider the case of a Web services-based credit-scoring system implemented some time ago by financial services giant Wells Fargo. The idea, says Jonathan House, an IT director with medical services company Amerisys, was that the Wells Fargo application could easily consume the services published by a host of different credit-scoring providers. That didn’t turn out to be the case, however. “Each of the organizations that we interacted with had their own proprietary software for accessing credit-score information. None were compatible with each other, and none were Web-services based,” says House, a former programmer with Wells Fargo.

In some cases, he says, third-party providers have little or no incentive to accommodate the service-enablement agendas of their customers—even if the customer in question is a behemoth like Wells Fargo. “We built the ‘credit-score’ interface that the application used, along with four different implementations of that interface for each of the vendors we were working with. In two cases we asked the vendor to write the interface code for us, and offered to pay them to simplify our job, and in both cases we were turned down flat because they saw that the interface design made their service a commodity,” he says.
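
What House describes is, in structural terms, one neutral interface owned by the application, with a thin adapter wrapping each vendor’s proprietary access method. The sketch below is a loose illustration with invented names and stubbed internals; the real implementations would translate each call into a vendor’s protocol. It also suggests why the vendors balked: once the application depends only on the interface, any provider behind it becomes interchangeable.

    // The application-owned "credit-score" interface.
    interface CreditScoreProvider {
        int scoreFor(String applicantId);
    }

    class VendorAAdapter implements CreditScoreProvider {
        public int scoreFor(String applicantId) {
            // A real adapter would translate this call into vendor A's
            // proprietary access software and map the response back.
            throw new UnsupportedOperationException("vendor A client not wired in");
        }
    }

    class VendorBAdapter implements CreditScoreProvider {
        public int scoreFor(String applicantId) {
            // Likewise for vendor B's incompatible, non-Web-services software.
            throw new UnsupportedOperationException("vendor B client not wired in");
        }
    }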

The Upside of Service Enablement

Industry watchers such as Re say service-enablement could help improve relations with line-of-business customers—in part by making IT much more responsive and (from the perspective of the line-of-business) intelligible.

“I think there’s promise for many whole new approaches to how you do the underlying IT things. Take this idea of ‘On Demand’ computing, this idea of dynamically mapping that stack of IT resources that you have to whatever your application requirements are, and doing that in not just a static way, but a dynamic way that changes minute by minute in terms of how well you’re meeting your service levels or not,” he comments. “If you have greater visibility into the applications, and you have that business process awareness, and you know that this Web service that just flowed through the system is part of this very critical business process, and I want to treat it a little differently, that is just an enormous step forward.”
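
As a rough illustration of the business-process awareness Re describes, the sketch below tags each incoming message with the business process it belongs to and gives critical processes preferential treatment. The process names and the two-lane queue design are invented for the example; a real implementation would hook into whatever messaging middleware is already in place.

    import java.util.Map;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class ProcessAwareRouter {
        // Invented example mapping of business processes to criticality.
        private final Map<String, Boolean> criticalProcesses =
            Map.of("quarter-close", true, "newsletter-send", false);

        private final BlockingQueue<String> expressLane = new LinkedBlockingQueue<>();
        private final BlockingQueue<String> standardLane = new LinkedBlockingQueue<>();

        public void route(String message, String businessProcess) throws InterruptedException {
            // A message that is part of a critical business process jumps to
            // the express lane; everything else waits its turn.
            if (criticalProcesses.getOrDefault(businessProcess, Boolean.FALSE)) {
                expressLane.put(message);
            } else {
                standardLane.put(message);
            }
        }
    }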

About the Author

Stephen Swoyer is a Nashville, TN-based freelance journalist who writes about technology.