Dev Trends

The concept of evolution may be misunderstood and sometimes misapplied, but most people accept it as the primary mechanism by which biological diversity and change occur. While biological evolution is a natural phenomenon, the underlying concepts are applicable within other domains. In fact, evolution could be applied (albeit loosely) to the growth of the Internet and World-Wide Web (WWW). Let's consider the basic evolutionary process and some of the ramifications it has for the 'Net.

Evolutionary processes really have no beginning or end (except extinction). However, new forms appear and diversify roughly according to a series of four steps. First, some event occurs which creates a new entity with potential for growth and persistence. In some cases, something happens to endow a class of marginal entities with the same potential.

Now consider the Internet and WWW. The Internet was created when ARPANET (the Advanced Research Projects Agency network), an experimental government-funded communication network for governmental R&D, was standardized completely on TCP/IP as the network's communication protocol and was opened up to the educational and research communities. At this time, many of the services common to the 'Net, such as E-mail, TELNET and FTP, were developed.

Limited to educational institutions and governmental research and development sites, the Internet was in a sense a marginalized technology. Two factors "opened up" the Internet to become the ubiquitous, international communication pipeline it is today. First, the National Science Foundation, the manager of the Internet, made the 'Net available to commercial traffic. Second, CERN (the European Laboratory for Particle Physics) began to develop technologies that would allow the viewing of linked hypertext documents containing graphics, sounds and video. This assortment of Internet concepts and service technology, known collectively at CERN as the WWW project, has subsequently been commercialized. HTTP, a communications protocol that allows hypertext documents to be retrieved, is one example of the technology.
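HTTP's document-retrieval model is simple enough to sketch in a few lines. The Python snippet below (illustrative only; the host, path, and server response are hypothetical, and no actual network traffic is involved) shows the shape of an HTTP/1.0 exchange: a plain-text GET request goes out, and a status line, headers, and hypertext document come back.

```python
# Sketch of the request/response exchange HTTP uses to fetch a hypertext
# document. The server response below is canned for illustration.

def build_get_request(host: str, path: str) -> str:
    """Build a minimal HTTP/1.0 GET request for the given host and path."""
    return (
        f"GET {path} HTTP/1.0\r\n"
        f"Host: {host}\r\n"
        "\r\n"
    )

def parse_status_line(response: str) -> tuple:
    """Split the first line of a response into version, code, and reason."""
    status_line = response.split("\r\n", 1)[0]
    version, code, reason = status_line.split(" ", 2)
    return version, int(code), reason

request = build_get_request("www.example.org", "/index.html")

# A canned server reply: status line, one header, blank line, HTML body.
response = (
    "HTTP/1.0 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "\r\n"
    "<html><body>Hello, Web.</body></html>"
)
version, code, reason = parse_status_line(response)
```

The protocol's simplicity — a stateless, text-based request per document — is precisely what the "simple" Web described below is built on.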

Expansion and growth

During the second stage of the evolutionary process, the new or altered entities begin to grow into and thrive within a particular niche. All niches the entity is capable of entering are filled. This evolutionary step, too, has its Internet/WWW counterpart. Unless you have spent the last two years in the tundra collecting life-bearing Martian meteorites, there can be little doubt that Internet/Web interactions, consisting of accessing and reading collections of hyperlinked HTML pages, have permeated all aspects of the computing landscape. For example, most classes of software (and hardware) have added functionality to support this type of Internet interaction model.

During the third stage of biological evolution, inhomogeneities of circumstance and environment combine to stimulate diversity. For the Internet and the Web, this is the evolutionary stage in which we find ourselves today. Technology has evolved to the point where it is possible to animate Web pages or to access databases via the Web. No longer are Internet/Web interactions limited to examining static HTML documents.

Extinction of the "simple" Web

During the fourth evolutionary step, specialization of an entity's form and function expands. Alternatively, a new entity comes along that is better adapted to the existing environment, replacing the existing entities. In either case, new functionality (or new entities) makes for differential survival rates.

Some pundits state that the "simple" Web (that is, the document-centric, static Web built upon the technical troika of browsers, HTTP, and HTML) will give way to better technology. The usual target for fossilhood is HTTP, the protocol for carrying hypertext documents. The proposed replacements include alternative types of middleware. Although DCE RPCs, message-oriented middleware, transaction monitors, ActiveX and other approaches are available as middleware alternatives to HTTP, the leading contender for HTTP replacement is IIOP.

What is IIOP?

You've seen the headlines in the trade press: "Oracle selects IIOP over Microsoft's Distributed Component Object Model (DCOM) for its Web server," "Netscape standardizes on IIOP for both browser and server," and so on. But what exactly is IIOP, and why does it matter?

First, IIOP stands for Internet Inter-ORB Protocol. It specifies how inter-ORB messaging occurs over TCP/IP. Recall that version 1 of the Common Object Request Broker Architecture (Corba) specified ORB interface portability via the Interface Definition Language (IDL), but did not specify a single protocol to be used for ORB interaction over a network. As a result, interoperability between ORBs from different vendors was severely limited. The Corba 2.0 specification defined interoperability between objects across various ORB implementations using the OMG's General Inter-ORB Protocol (GIOP). GIOP handles the mapping of ORB messages across various network protocols. One specific GIOP implementation, IIOP, maps GIOP over TCP/IP. With Corba 2.0, the OMG provided for ORB interoperability via mandated IIOP compliance.
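To make the layering concrete, here is a sketch in Python of packing a GIOP message header, the fixed 12-byte prefix that IIOP carries over a TCP/IP connection. The field layout follows my reading of the GIOP 1.0 header (magic bytes, version, byte-order flag, message type, body size); the message type and placeholder body used here are illustrative, not a complete Request message.

```python
import struct

# GIOP 1.0 message header fields: 4-byte magic "GIOP", major/minor version,
# byte-order flag (0 = big-endian), message type, and body size in bytes.
GIOP_MAGIC = b"GIOP"
MSG_TYPE_REQUEST = 0  # GIOP "Request" message type

def pack_giop_header(body: bytes, msg_type: int = MSG_TYPE_REQUEST) -> bytes:
    """Pack a big-endian GIOP 1.0 header for the given message body."""
    return struct.pack(
        ">4sBBBBI",               # magic, major, minor, byte order, type, size
        GIOP_MAGIC, 1, 0, 0, msg_type, len(body),
    )

body = b"\x00" * 20               # placeholder request body, for illustration
header = pack_giop_header(body)
assert len(header) == 12          # the GIOP header is always 12 bytes
```

An ORB speaking IIOP would write `header + body` to a TCP socket; an ESIOP implementation would carry the same GIOP message over a different transport, which is why the OMG's bridging requirement preserves interoperability.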

Since TCP/IP is the protocol of the Web, IIOP effectively opens up distributed object technology, formerly limited to proprietary LANs and WANs, to the boundless frontiers of the Web (or corporate intranets). Equally important, IIOP does not limit ORB interaction only to TCP/IP pipes. The OMG has also specified the Environment Specific Inter-ORB Protocol (ESIOP) for passing GIOP-compliant messages over proprietary protocols. The OMG has further stipulated that ESIOP ORBs must provide bridges to IIOP. Used either "natively" over TCP/IP, or with ESIOP bridges, IIOP should boost the moribund distributed object market.

The IIOP approach

Most discussions of IIOP center on its applicability for building Web-enabled transactional applications. Following the technology adoption curve, organizations are beginning to Web-enable core business applications. Unlike earlier Web applications, which were largely limited to calling static HTML pages from browsers, the next generation of systems will be designed to interact with server back ends.

CGI-to-server gateways are the most common mechanism for providing database and application access from Web browsers. Many development tools also support the use of proprietary Web server APIs, such as Netscape's NSAPI and Process Software/Microsoft's ISAPI, in place of CGI. The Web server APIs do overcome the performance and security limitations of CGI. In either case, however, the user connects to the database or application server via a Web server proxy. Messages and database records are passed back and forth between browsers and Web servers as HTML over HTTP, and from there to the appropriate database or application server.
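The CGI path can be sketched as follows. This Python fragment is a toy (the `id` parameter and the customer table are hypothetical), but it mimics what a CGI gateway program does: take the query string the Web server passes in, perform a back-end lookup, and hand back headers plus an HTML document for the server to relay to the browser over HTTP.

```python
from urllib.parse import parse_qs

# Toy lookup table standing in for the back-end database a real gateway
# program would query on the browser's behalf.
CUSTOMERS = {"1001": "Acme Corp.", "1002": "Globex"}

def handle_cgi_request(query_string: str) -> str:
    """Build a full CGI response (headers plus HTML body) for a query.

    A real CGI program would read QUERY_STRING from its environment and
    print this text to stdout for the Web server to forward to the browser.
    """
    params = parse_qs(query_string)
    cust_id = params.get("id", ["?"])[0]
    name = CUSTOMERS.get(cust_id, "unknown customer")
    body = f"<html><body>Customer {cust_id}: {name}</body></html>"
    return "Content-Type: text/html\r\n\r\n" + body

response = handle_cgi_request("id=1001")
```

Note that everything, including the database record, travels as HTML text through the Web server proxy; the browser never speaks to the back end directly. That indirection is exactly what the IIOP approach below removes.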

A better way

As in the browser/CGI interaction model, Web browsers can connect over IIOP to back-end services indirectly through Web server proxies (some Web servers already have Corba service interfaces, and more will do so in the future). In addition, browser clients can interact with database and application server back ends directly. For example, Java-enabled browsers can execute Java applets pulled down from Web servers as Corba-compliant ORBs and connect directly to database or application servers over IIOP.


Will IIOP and other types of middleware and mechanisms make HTTP obsolete? For many vendors and industry pundits the answer is "Yes." Their logic stream runs something like this ... "Most companies have progressed beyond simple authoring and adding animation to their Web applications. In the future, organizations will Web-enable more complex, transactional systems, and IIOP is a better protocol than HTTP from a performance (and ease-of-development) standpoint. Therefore, IIOP will replace HTTP as the primary transport mechanism for the Web."

Not likely. Have "fat client" systems and ODBC completely given way to multitiered, partitioned applications and high-performance, proprietary database drivers? Will they ever? How many DOS users are still out there? For most of us, the only four things we can count on in life are birth, death, taxes and DOS. Yet no one would argue that two-tier client/server is effective for large, complex distributed systems, or that ODBC offers better performance than native database drivers. DOS is not a better user interaction metaphor than Windows ... yet still it lives on.

Distributed object invocation over IIOP (via Java or not) will become a primary method for building browser-based transactional systems. For typical business systems, then, IIOP will largely replace HTTP. But even in many of these systems, HTML over HTTP will be used to invoke that first Java applet or ORB. More importantly, the amount of static content on the Web will continue to increase dramatically, and the demand for such material with it.

Forget the posturing and overstatement of the vendor community and other experts. Browsing for content will not disappear with the advent of Net-based transactional systems. For this type of information and interaction, the ubiquity and simplicity of HTTP is all that is required. IIOP will not replace HTTP for the simple reason that they occupy different niches. In evolutionary terms they do not compete ... one will not supersede the other because they operate in different competitive landscapes.