A year ago in ADT: Grid computing

Grid computing could be described as a large combine of computers noodling on a great problem. The computing grid is likened to the electric power grid, which routes power around the nation as needs require. The Grid is a collection of distributed computing resources that appear as one virtual system, usually available over a large network. There is a lot about Grid computing that is familiar.

Multiprocessor parallel computers go back a ways. Companies such as Convex, Perkin-Elmer, Concurrent, Kendall Square, Multiflow, Encore and Thinking Machines all gained attention at one time or another, but their computers' uses were largely limited to scientific computing arenas.

The 1980s' parallel prophets quickly tapped out demand without becoming especially visible in the commercial computing realm. More highly distributed approaches, such as the Linda coordination language, go back to the early 1980s. These were fairly academic efforts, although the 1990s-era SETI@home program used idle PC cycles to analyze deep-space data that might disclose life in other solar systems.

What is new here is the idea of connecting computing services over the Internet, and even that is familiar in the commercial realm if one thinks back just a couple of years to the brief, venture capital-devouring advent of the Application Service Provider (ASP). Today, the companies pushing the Grid include Sun, Dell, TME, DataSynapse, Intel, Platform, Entropia and others, but none has been a more vivid advocate to date than IBM, which sometimes refers to the Grid as "pay-as-you-go" computing.

IBM, scientists and other Grid computing advocates hope Grid will make its mark by solving previously intractable problems. Several examples come from Britain, not the least of which is a data grid proposed by Oxford University. Oxford recently joined with IBM and the British government to create a country-wide computing Grid that will help doctors diagnose and treat breast cancer. As conceived, the project would be one of the first Grids to be composed entirely of commercial technology.

The Oxford program is a bit different from Grid as shown in SETI; it represents more of a data grid than a compute grid. The system will feed breast cancer patient data into a federated database residing at four U.K. computing centers, with more machines to come online later.

The details of the federation have not been worked out at this point. One could guess the system would rely on XML to integrate diverse data, but it would not be proper to call it a big "XML Grid," said an individual close to the project.

Working with commercial vendors is key to this medical project, said Oxford Professor Paul Jeffreys.

"This project is application-driven. We are not just building a [one-off] infrastructure. We hope to produce generic middleware that is widely applicable," said Jeffreys, director, Oxford University Computing Services. "The applications would not be a success if they were not fully connected with industry in the process."

This project has serious implications, and not just for Grid computing, said Dave Watson, program director at IBM's Hursley, U.K., services and technology group.

"This is a really good project to do. It has the potential to use a lot of our technology in a powerful way. And from a social point of view, it has the potential to solve some problems and make life better for some people. It has potential beyond the radiologist," noted Watson.

Grid computing coda: For some time to come, the value in Grid computing will be captured by big operations that need to do a lot of computations while saving money. Academic computer users ready to help create unique applications, and commercial scientific users strapped for computer center power, will lead the way. General corporate server infrastructure may at the moment be underutilized; if another big build-out is required, the Grid will loom as an alternative. IBM, Sun and Dell are in the forefront here; smaller players that do well will quickly become takeover targets, as large-scale success would be a capital-intensive, double-edged sword. Standards must take hold. Much middleware needs to be built. "Pay-as-you-go" requires infrastructure changes, as well as changes in the thinking of CIOs. The competition is the next truckload of servers that arrives at the loading dock.


