News
Virtual Systems of Tomorrow Could Take Cues from Today's Mainframes
- By Stephen Swoyer
- September 2, 2008
Given its many virtues -- e.g., the ability to rapidly deploy new OS
or application instances, or to provision additional OS or application resources
to meet spikes in demand -- it's tempting to think of virtualization as
a turnkey proposition: You buy a hypervisor, you install it on your hardware, and
voilà! You've virtualized your infrastructure.
Not quite. Virtualization is both a software and a hardware play, but you
almost wouldn't know it, chiefly because software hypervisors from VMware,
Citrix, or even Microsoft garner the lion's share of attention. Nonetheless,
experts say, successfully realizing an infrastructure in which all (or almost
all) IT assets are treated as virtual resources will require a big helping hand
on the hardware side, too.
OEMs, thankfully, are already building the big, fast virtual servers of tomorrow.
To a surprising degree, tomorrow's highly virtualized systems sound like (and
might actually look like) the most highly virtualized platform of today: IBM's
System z mainframe.
Virtualization Changes Everything
Pervasive virtualization will change just about everything, said Gordon Haff,
a principal IT advisor with consultancy Illuminata, starting first and foremost
with how shops go about operating or optimizing their datacenters. That's because
virtualization in its many flavors (e.g., server, storage, network) tends to
encourage the shifting of workloads from one physical resource to another.
To do this, or to do it more effectively than is possible today, virtualization
must also change how these systems are designed, according to Haff.
"[A] system intended to run a dynamic mix of mobile workloads doesn't
necessarily have the same characteristics as one oriented toward running a modest
number of static applications," he said. "We're seeing different tradeoffs
in system specifications -- such as increased memory capacities and a requirement
for processors with virtualization assists built into their instruction sets
-- as a result."
The upshot, Haff continued, is truly the stuff of sea change: System design
is shifting away from physical conceptual models and toward a virtual model.
That's as it should be, according to Haff.
"[V]irtualization changes how physical servers are used," he said.
"[T]hat's basically the point. It lets you run a variety of workloads on
a single system, increase hardware utilization, and shift around workloads in
response to changes in demand," Haff continued. "Thus, it would hardly
be surprising if servers optimized for virtualization didn't necessarily mimic
designs favored for running a modest number -- or even just one -- application."
Even so, he conceded, the virtual-friendly servers of today still look a lot
like their predecessors -- for good reason. "The differences aren't necessarily
dramatic. They don't -- so far -- result in servers that are unrecognizable,
or that aren't also suitable for running un-virtualized workloads as well,"
Haff said, "but we're clearly starting to see changes -- both in the way
that virtualization is being woven more tightly into the system's fabric and
in the way that other aspects of the hardware are evolving in response to the
differing -- and often more demanding -- requirements of virtualized workloads."
Borrowing from Big Iron
One refinement that we're already starting to see is the embedded hypervisor:
VMware, for example, touts ESXi, a hypervisor that runs in 32MB of flash memory;
ditto for Citrix and its XenExpress technology, which it acquired along with
XenSource.
It's an idea with mainframe-esque roots, according to Haff, who cited as an
example Start Interpretive Execution (SIE), a specialized virtualization instruction
that IBM first enabled on its System/370 mainframes back in the early '80s.
The embedded hypervisors of today -- which many x86 hardware OEMs now offer
as available options -- are somewhat similar, according to Haff.
"The idea is that you buy a server with an embedded hypervisor sitting
somewhere on a flash memory card or a USB key," he said. "Booting
the server for the first time then kicks off a menu-driven configuration process
that would end up with an installed hypervisor ready for guest operating systems
to be loaded on top. Effectively, the base platform exposed to the administrator
becomes the hypervisor rather than the hardware."
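Once the hypervisor is the base platform, loading a guest OS becomes an API
call rather than a bare-metal install. Here's a minimal sketch, assuming the
libvirt Python bindings against a Xen host; the domain definition below is
deliberately skeletal and hypothetical.

```python
# Minimal sketch, assuming the libvirt Python bindings and a Xen host:
# define a guest against the hypervisor and boot it. The XML is a
# skeletal, hypothetical domain definition, not a production config.
import libvirt

DOMAIN_XML = """
<domain type='xen'>
  <name>guest01</name>
  <memory>524288</memory>  <!-- 512MB, expressed in KiB -->
  <vcpu>1</vcpu>
  <os><type>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/xen/images/guest01.img'/>
      <target dev='xvda'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open("xen:///")     # connect to the local hypervisor
dom = conn.defineXML(DOMAIN_XML)   # register the guest definition
dom.create()                       # boot it on top of the hypervisor
print("started guest:", dom.name())
conn.close()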
It's in this respect and others that the ideal "virtual-ready" server
of tomorrow might look a lot like an existing archetype -- the old mainframe.
Of course, even with embedded hypervisor support, x86 virtualization is still
a far cry from the mainframe gold standard, Haff pointed out.
"[W]hen a vendor controls the whole technology stack [as IBM does] from
processor to operating system, that control can be leveraged to make virtualization
really hum," noted Haff, who added that "System z remains the gold
standard in this regard," although Big Blue's POWER systems -- with their
PowerVM implementation -- aren't slouches, either, thanks to their hypervisor
decrementer (HDECR) technology, which enables mainframe-like granularity in
carving up processor resources among partitions.
Ditto for the question of system or image size: The virtual-ready servers of
tomorrow could resemble -- in sheer mass, if not in internal CMOS -- the highly
virtualized Big Iron systems of today. The sheer horsepower of today's systems
all but demands that resources be virtualized, Haff argued. Many SMP systems
are simply too powerful to play host to the discrete applications or services
that were once their raisons d'être. This raises a question: Why buy big SMP
servers at all? If it's truly a virtual world, why not just go with dozens (or
even hundreds) of comparatively inexpensive blades?
Here, too, the mainframe leads by example. "[T]here are offsetting advantages
to using fewer, but larger, physical servers," Haff said. "Big Iron
has customized internal connections that let a system communicate internally
at memory access speeds. Such interconnects are more expensive than the networks
used to coordinate a collection of scale-out boxes. [They're] also several orders
of magnitude faster...than networking gear."
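The "orders of magnitude" claim survives a back-of-envelope check. The latency
figures below are rough, assumed ballparks, not vendor measurements.

```python
# Back-of-envelope check on the "orders of magnitude" claim. Both
# latency figures are rough, assumed ballparks.
import math

MEMORY_SPEED_INTERCONNECT_NS = 100   # ~100 ns: SMP backplane / memory access
GIGABIT_ETHERNET_RTT_NS = 100_000    # ~100 us: commodity network round trip

ratio = GIGABIT_ETHERNET_RTT_NS / MEMORY_SPEED_INTERCONNECT_NS
print(f"network is ~{ratio:.0f}x slower "
      f"(~{math.log10(ratio):.0f} orders of magnitude)")
# -> network is ~1000x slower (~3 orders of magnitude)
```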
Even in virtual space, large trumps small. "Specialized high-end servers
carry premium price tags, and administrators may need to learn new tools and
acquire some new skills to operate them," Haff conceded. "[O]ther
things being equal, large is better than small when it comes to hardware for
virtualized environments."
Other virtual-ready accoutrements include support for considerably more memory
and fatter pipes. Big Commodity Memory is coming: High-speed memory specialist
Hynix recently demonstrated three 16GB DDR3 memory modules running in a
tri-channel configuration -- one module per channel, for 48GB in total and a
300 percent improvement over previous module densities.
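The arithmetic is straightforward; here's a quick sanity check (the
4GB-per-module baseline is an assumption, inferred from the quoted improvement
figure).

```python
# Quick arithmetic behind the Hynix demonstration cited above: three
# 16GB DDR3 modules, one per channel in a tri-channel configuration.
MODULE_GB = 16
CHANNELS = 3
total_gb = MODULE_GB * CHANNELS
print(f"{CHANNELS} channels x {MODULE_GB}GB = {total_gb}GB total")
# Against an assumed 4GB-per-module baseline, 16GB is a 4x capacity
# jump -- i.e., the "300 percent improvement" in the story.
print(f"improvement over 4GB modules: {(MODULE_GB / 4 - 1):.0%}")
```

As for fatter pipes: they don't get much fatter than virtualized Ethernet,
Haff noted.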
It's in this respect, too, that the virtual server of tomorrow smacks of the
mainframe system of today. "[L]arge numbers of Linux guests [running on
z/VM] don't need to communicate with each other over a standard network interface.
Oh, they think that's what they are doing. However...the traffic never enters
any physical networking hardware," Haff said.
These days, x86 virtualization players are employing similar design tactics.
"VMware and XenServer provide analogous capabilities on x86 servers,"
Haff said. "In addition to traversing interconnects that are faster than
a 'real' network, this virtual Ethernet can do some other optimizations,"
including the elimination of CPU-intensive TCP checksum processing.
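As a toy illustration -- and emphatically not any vendor's implementation --
the idea reduces to a hypervisor-resident virtual switch that forwards frames
between co-resident guests as plain in-memory copies.

```python
# Toy illustration (not any vendor's implementation): a hypervisor-
# resident virtual switch forwarding Ethernet-style frames between
# co-resident guests as in-memory copies. No physical NIC is ever
# involved, and nothing is checksummed on the wire -- there is no wire.
from collections import deque

class VirtualSwitch:
    def __init__(self):
        self.ports = {}                    # MAC address -> guest rx queue

    def attach(self, mac):
        self.ports[mac] = deque()
        return self.ports[mac]

    def send(self, dst_mac, frame):
        # "Transmission" is a memory-speed enqueue onto the peer's queue.
        self.ports[dst_mac].append(frame)

vswitch = VirtualSwitch()
rx_a = vswitch.attach("02:00:00:00:00:0a")  # guest A's virtual NIC
rx_b = vswitch.attach("02:00:00:00:00:0b")  # guest B's virtual NIC

vswitch.send("02:00:00:00:00:0b", b"hello from guest A")
print(rx_b.popleft())                       # guest B receives it instantly
```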
About the Author
Stephen Swoyer is a Nashville, TN-based freelance journalist who writes about technology.