In-Depth
The Mainframe Capacity Conundrum Revisited
zLinux or Big Iron J2EE workloads perform better and are cheaper than their RISC- or Intel-based alternatives
- By Stephen Swoyer
- July 18, 2006
IBM Corp. likes to talk up the surging growth in mainframe MIPS shipments as one indicator that Big Iron is back. Total MIPS shipments grew by more than 20 percent in Q4 of last year, and by 22 percent in the first quarter of 2006.
If this impressive MIPS growth is any indication of the mainframe’s new vitality, Big Iron is indeed back—big time. Better still, IBM officials say, much of the growth in MIPS is earmarked to support new workloads—technologies such as zLinux, WebSphere-on-z/OS, and—more recently—data processing (via Big Blue’s new zSeries Integrated Information Processor). For mainframe pros accustomed to obsessing over the future of their beloved platform, that’s great news, right?
Jim Stallings, GM of IBM’s System z business, seems to think so. Big Blue’s specialty processor engines have been a catalyst for MIPS growth, he says, and with zIIP having just become generally available—and with additional specialty engines (for data center security, among other workloads) on the drawing board—MIPS shipments should continue to surge.
“The whole reason we’re doing all this [offering mainframe specialty engines] is to make it easy for the customer to move the workloads [back to Big Iron]. Farmers Insurance is a customer that you may have heard about. They had about 1,700 MIPS of WebSphere. They installed zAAP and it reduced their MIPS usage by about 700,” he explains.
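Stallings’ Farmers figures make for a quick back-of-the-envelope check. The Python sketch below works out the offload fraction those two numbers imply; only the MIPS figures come from the article, and the note about chargeable capacity reflects the general rule that specialty-engine MIPS don’t count toward software charges.

```python
# Back-of-the-envelope arithmetic on the Farmers Insurance figures quoted
# above. Only the two MIPS numbers come from the article.

websphere_mips = 1700   # WebSphere workload before zAAP (from the article)
offloaded_mips = 700    # general-purpose MIPS freed by zAAP (from the article)

offload_fraction = offloaded_mips / websphere_mips
print(f"Share of WebSphere work shifted to zAAP: {offload_fraction:.0%}")  # ~41%

# Software charges typically key off general-purpose capacity, and zAAP
# engines are excluded from that count, so the chargeable base shrinks:
remaining_gp_mips = websphere_mips - offloaded_mips
print(f"Chargeable general-purpose MIPS after zAAP: {remaining_gp_mips}")  # 1000
```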
The flip side to surging MIPS growth—specifically to surging MIPS growth earmarked for new workloads—is that such workloads aren’t necessarily as efficient as the bread-and-butter COBOL or Assembly workloads that have historically been the workhorses of mainframe systems. After all, zLinux and J2EE workloads aren’t as miserly with mainframe MIPS as their predecessors. One upshot of this, skeptics charge, is that IBM can’t price new workloads at a native z/OS premium, for the simple reason that customers aren’t getting as much bang for their buck when they opt to run next-gen workloads on new mainframe CMOS.
“I think the doubling of MIPS, the growth of the MIPS capacity of the machines, that is encouraging—but the software that consumes the new MIPS is typically in the WebSphere and the Linux workloads,” argues Andre den Haan, CIO of mainframe ISV Seagull Software Inc. “To support 1,000 users, you have X [times] the number of MIPS that you require in a traditional workload environment. It is not unrealistic that you might need at least 20 times as many MIPS to support the same number of users in a Java environment.”
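Den Haan’s 20x figure is easier to weigh next to the specialty-engine discounts discussed in the next paragraph. A minimal sketch, assuming an invented baseline of 50 MIPS per 1,000 users and an invented 10:1 price gap between full-bore z/OS capacity and specialty-engine capacity:

```python
# Illustrating den Haan's rough estimate that a Java workload may need on
# the order of 20x the MIPS of a traditional workload for the same users,
# and how discounted specialty-engine capacity changes the economics.
# The baseline MIPS figure and the price ratio are invented for illustration.

users = 1000
traditional_mips = 50            # hypothetical baseline for 1,000 users
java_multiplier = 20             # den Haan's rough upper estimate
java_mips = traditional_mips * java_multiplier

z_os_price_per_mips = 1.0        # normalized cost of full-bore z/OS capacity
specialty_price_per_mips = 0.1   # assumed discount for IFL/zAAP capacity

traditional_cost = traditional_mips * z_os_price_per_mips
java_cost = java_mips * specialty_price_per_mips

print(f"Traditional: {traditional_mips} MIPS for {users} users -> cost {traditional_cost:.0f}")
print(f"Java/J2EE:   {java_mips} MIPS for {users} users -> cost {java_cost:.0f}")
# On these assumptions, a 20x MIPS gap shrinks to roughly a 2x cost gap.
```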
Den Haan isn’t a knee-jerk skeptic. He’s encouraged by the mainframe’s resurgence—what’s good for Big Blue is, in most cases, good for ISVs such as Seagull, too—and thinks zLinux, mainframe J2EE, and zIIP have helped buttress the mainframe’s rebirth. Besides, den Haan concedes, the mainframe capacity conundrum is largely a moot issue. It’s not as if IBM is pulling a Big Iron bait-and-switch on its customers: Big Blue sells its specialty engines (IFL, zAAP, and zIIP) for a fraction of what it charges for full-bore z/OS capacity, so customers are—at the very least—getting what they’re paying for.
By the same token, den Haan points out, it’s understandable that IBM prices native z/OS capacity at a premium.
“I do understand why IBM prices the new workload-type things more attractively, and I do understand why IBM still charges a lot of money for their traditional workloads. At the end of the day, I think what counts is the value of the software itself. To give you an example, I have yet to see one situation where a large-scale application with 40,000 workstations is running on either a WebSphere or a Linux backend. They just do not exist, I think,” he comments.
There’s a further wrinkle here, argues IBM’s Stallings. Even if next-gen workloads aren’t as efficient as their COBOL or Assembly forebears—a point which Stallings refuses to concede—they’re considerably more efficient, in a number of ways, than workloads running on RISC- or Intel-based systems.
“The problem with Intel and RISC with these application servers is utilization, and you’re paying for a lot of headroom that you don’t even use, so they may for short spike periods [of intense demand] perform better. In the long haul, the same workloads running on the mainframe perform better and are cheaper,” he notes, arguing that the virtualization capabilities of RISC and Intel systems are still immature relative to what’s possible on mainframe hardware.
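A toy capacity-planning calculation shows why the utilization argument has teeth. Both utilization figures below are assumptions chosen for illustration, not measurements: distributed boxes sized for their own peaks often sit idle, while a virtualized mainframe sharing many workloads can run near saturation.

```python
# A toy version of Stallings's utilization argument. All figures are
# illustrative assumptions, not benchmarks.

steady_demand = 400                  # aggregate work to be done, arbitrary units

distributed_avg_utilization = 0.15   # assumed average for Intel/RISC boxes
mainframe_avg_utilization = 0.85     # assumed average for a shared mainframe

# Installed capacity you must buy to deliver the same steady demand:
distributed_capacity = steady_demand / distributed_avg_utilization
mainframe_capacity = steady_demand / mainframe_avg_utilization

print(f"Distributed capacity to buy: {distributed_capacity:.0f} units")  # ~2667
print(f"Mainframe capacity to buy:   {mainframe_capacity:.0f} units")    # ~471
print(f"Headroom ratio: {distributed_capacity / mainframe_capacity:.1f}x")
```

The gap is just the ratio of the two utilization assumptions; the dearer the idle headroom, the stronger the consolidation case.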
The point, Stallings says, is that next-gen workloads offer considerable value, whether you’re talking about modernizing existing mainframe workloads or bringing RISC- or Intel-based workloads back home again.
“Power costs [for distributed application servers] are a very direct issue for a lot of our customers. It’s starting to factor into how they make their buying decisions. I ran into a very large customer [with] a huge Intel server environment and they’re beginning to measure the acquisition costs [of additional Intel servers] in kilowatts per server,” he says. “In the data center, every square foot is air-conditioned, so there’s the cost of the real estate, but then there’s the cost to power and cool all of that. In one-fourth of the footprint, you can run the same workload [on a mainframe]. The price of [data center] real estate is not going down, it’s going up. Data centers are getting larger and larger. As customers acquire more and more servers, you’ve got to cool them.”
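The power math is easy to sketch, too. Every number in the following snippet is a hypothetical planning assumption (server count, draw per server, cooling overhead, electricity price), used only to show how “kilowatts per server” turns into an annual line item:

```python
# A rough power/floor-space sketch of the consolidation claim quoted above.
# Every number here is a hypothetical planning assumption, not a vendor spec.

num_intel_servers = 200
kw_per_server = 0.5            # assumed draw per distributed server, in kW
cooling_overhead = 1.0         # assume 1 watt of cooling per watt of IT load
price_per_kwh = 0.10           # assumed electricity price, USD
hours_per_year = 24 * 365

farm_kw = num_intel_servers * kw_per_server * (1 + cooling_overhead)
farm_cost = farm_kw * hours_per_year * price_per_kwh
print(f"Distributed farm: {farm_kw:.0f} kW, ${farm_cost:,.0f}/year in power")

# If a consolidated mainframe ran the same workload in a quarter of the
# footprint and, say, a quarter of the power (an assumption for illustration):
mainframe_kw = farm_kw * 0.25
mainframe_cost = mainframe_kw * hours_per_year * price_per_kwh
print(f"Mainframe: {mainframe_kw:.0f} kW, ${mainframe_cost:,.0f}/year in power")
```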
About the Author
Stephen Swoyer is a Nashville, TN-based freelance journalist who writes about technology.