In-Depth
Application servers unmasked
- By Max P. Grasso, Bharat Gogia and Hoa Nguyen
- June 4, 2001
Champions of application servers promise that the technology can ease the process
of developing complex electronic commerce applications. It is a claim that IT
development managers have heard before, so many are prudently evaluating the
technology before taking the claims at face value.
Many of today's IT development managers lived through the broken promises
of client/server and rapid application development. A lack of client/server
infrastructure and experience exacerbated the situation, especially as users
waited to receive new application executables or updated application versions.
Emerging application architectures -- in particular, those involving e-business
services -- promote a multitiered and thin-client approach built to conquer
the problems brought on by client/server. The current prototypical architecture
has a user with a browser communicating with a Web server that, in turn, delegates
to one or more servers the task of managing and executing business logic from
numerous data sources scattered across various platforms.
The need for servers to manage business logic created an opportunity for application
servers to become a key infrastructure element in a multitiered, distributed
architecture. Application servers are ideally suited for the Internet because
they handle all of the attributes required of an e-business service, such as
availability, scalability, security and integrity, and yet are flexible enough
to build business functionality.
In theory, application servers separate business functions from system functions
along well-defined lines. Organizations should therefore be able to build business
components and independently choose the application server on which to deploy
them. In practice, however, the choice is restricted to those application servers
that support the component model used for the business logic.
Application servers typically provide services as interfaces defined within
the context of an accepted component infrastructure, either the Enterprise JavaBeans
(EJBs) model from Mountain View, Calif.-based Sun Microsystems Inc. or the Component
Object Model (COM) from Microsoft Corp. Each of these models is able to handle
business functionality defined as components within the same infrastructure.
The combined use of Java and CORBA has come to provide a simplified, distributed
component model for many a distributed project. In some ways, it may be the
most widely used component model, but it lacks the completeness of EJB and COM
and also suffers from some contradicting characteristics.
Our expectation is that the number of application servers in the EJB camp
will grow, and individual products will be differentiated through system services,
special purpose services and interfaces. Microsoft, on the other hand, has embedded
its COM application server -- Microsoft Transaction Server (MTS) -- in the Windows
platform, effectively killing any competition. The only Windows alternative
-- the Jaguar offering from Emeryville, Calif.-based Sybase Inc. -- is now part
of Sybase's Enterprise Application Server, which supports both the COM and EJB
component models. As long as the Windows platform thrives, however, there will
be many companies besides Microsoft adding services and interfaces to MTS.
Understanding application servers
The definition of the application server category has long been shrouded in
fog. This is because the concept has evolved over the past few years, with the
term being applied to a number of very different products. Currently, two major
definitions of the category are in use. The more generic definition counts an
application server as any product that handles business logic and is capable
of service requests over the network, directly or through a Web server. This
definition is the source of much misinterpretation as most of the product services
are left unspecified.
The more restrictive and meaningful definition can be found by combining the
EJB specification with Microsoft's MTS technology and using the broad spectrum
of services each considers as defining the services for the whole category.
Thus, a description of the application server product category essentially turns
into a recounting of a fairly large number of services. To make things easier,
we will roughly classify these services as Presentation Services, Distributed
Object Services, Transaction Services, Security Services, Integration Services
and Deployment Services.
Descriptions of each service follow, though development managers must note
that some services may fit into multiple classes. For instance, Declarative
Security Services straddle Security Services and Deployment Services.
Presentation Services
Developing a thin-client Internet application means mixing static and dynamic
information, and maintaining client-side state on top of stateless protocols,
in particular HTTP. While it was common for application developers to build
proprietary mechanisms for such work a few years ago, Web servers and application
servers have since taken on the task.
In addition, Presentation Services deliver content to the user interface,
and are responsible for tying the user interface to business logic and other
system services.
While some application servers provide an integrated development environment
(IDE) to ease the development of user interfaces, others require development
of the user interface on a separate IDE. There are also some servers that provide
drag-and-drop utilities to allow developers to quickly assemble user interfaces
and tie them to business logic.
These services, due to their association with the Web and the HTML layout,
are currently being reclassified as extensions of the Web server functionality
and are moving out of the core application server offerings.
Distributed Object Services
Distributed Object Services are the services closest to being core offerings.
Because the current state of computing demands that business functionality be
encapsulated into objects, developers now must define protocols that handle
object life-cycle events and make objects available on the network.
Distributed component models are targeted specifically at these issues. They
first define the concept of a component, how it should be programmed and what
events it is expected to handle. The model then defines how components are accessed
in a distributed environment.
Component models can be easily projected into blueprints of distributed systems
architectures, and developers can build the business logic so that it fits into
any such architecture.
In each of these models, the concept of a component matches an equally crisp
concept of a container. Containers interact with components according to the
protocol defined by the model. They activate components, instantiate objects
and relay network requests. They also keep request queues, and manage threads
and thread pools for request processing.
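The container duties just described -- activation, request relaying, thread pooling -- can be sketched in plain Java. The `Component` interface and `MiniContainer` class below are invented names for illustration, not part of any real server's API; a real container would add queuing limits, passivation and remote access.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical component contract: the container drives the
// life cycle (activate) and relays each network request (invoke).
interface Component {
    void activate();
    String invoke(String request);
}

// Toy container: registers components, then executes incoming
// requests on a managed thread pool, as real containers do.
class MiniContainer {
    private final Map<String, Component> registry = new ConcurrentHashMap<>();
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    void deploy(String name, Component c) {
        c.activate();               // life-cycle event handled by the container
        registry.put(name, c);
    }

    // Relay a request to the named component on a pooled thread
    // and wait for the result.
    String dispatch(String name, String request) {
        try {
            return workers.submit(() -> registry.get(name).invoke(request)).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    void shutdown() { workers.shutdown(); }
}
```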
While the concept of a container is well-defined, the implementation of the
container functionality is fertile ground for adding value; it is in this field
that application server vendors contend for market share. Thus, today's application
servers are first and foremost containers according to some defined distributed
component model. All other services are provided on top of this base functionality.
Three distributed component models are widely accepted today: Microsoft's
COM, EJB, and a model that combines Java with the CORBA object request broker
standard of the Object Management Group (OMG), Framingham, Mass. We will take
a brief look at each.
Microsoft's COM -- The COM specification fills Microsoft's need for a binary
component model. The first widely available component model, COM developed into
a full-fledged distributed component model with the advent of the Distributed
Component Object Model (DCOM).
Microsoft incorporated transaction monitoring facilities, as well as Security
and Deployment Services, into the COM model, giving birth to the first widely
available object monitor, MTS. MTS allows organizations to build enterprise
systems by developing the business logic as MTS-aware COM components. These
components can be easily built with any of the Microsoft development environments
-- Visual C++, Visual J++ or Visual Basic. While Visual C++ is required to build
components that can link with specific legacy systems, Microsoft still touts
Visual Basic as the environment of choice for ease of use. Microsoft's Visual
Java toolset is currently in limbo due to the suit with Sun.
Given its usefulness in building and managing enterprise applications, and
after a short separate life as a standalone product, MTS was quickly bundled
with the Windows NT operating system.
EJB -- Taking a cue from Microsoft, Sun created the JavaBeans component model,
drawing on concepts that could gain wide industry support. The EJB model builds
on the JavaBeans component model, borrows CORBA for distributed computing, and
defines Transaction Monitoring, Security and Deployment Services along the trails
blazed by Microsoft.
Though a follower in terms of chronology, the EJB model has earned unmatched
support in the industry and has taken the technology lead in some areas.
The development of EJB components is usually done through one of many Java
development environments that have come out over the last two years. Vendors
of EJB-based application servers are either relying on alliances with IDE vendors
or providing their own tools to make the building of components as smooth as
the building of standalone functionality. The two different approaches can be
seen in Symantec's VisualCafé or BEA's WebLogic on one hand, and Progress'
Apptivity or IBM's WebSphere on the other.
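The programming pattern the EJB model prescribes -- business logic written against an interface, with an implementation class whose life-cycle callbacks belong to the container -- can be sketched in plain Java. `CartRemote` and `CartBean` are invented names, and `create()` stands in for `ejbCreate`; a real bean would implement `javax.ejb.SessionBean` and be deployed to an application server.

```java
// Clients program against the remote interface, never the bean class.
interface CartRemote {
    void addItem(String sku);
    int itemCount();
}

// The bean class holds the business logic; the container would call
// its life-cycle methods and expose it through the remote interface.
class CartBean implements CartRemote {
    private final java.util.List<String> items = new java.util.ArrayList<>();

    // Container-invoked life-cycle callback (ejbCreate in EJB 1.1).
    public void create() { items.clear(); }

    public void addItem(String sku) { items.add(sku); }
    public int itemCount() { return items.size(); }
}
```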
Java/CORBA -- As the battle between EJB and COM continues, CORBA appears well
positioned to become a mature intermediary and the trusted communicator of enterprise
data.
Java seems to have some difficulties extending beyond the world of the Java
Virtual Machine (JVM), while COM has its own platform restrictions. CORBA can
span both worlds and most computing platforms. For example, the standard Java 2
VM now contains a CORBA Object Request Broker, and the EJB specification delegates
to CORBA the communication of data. CORBA vendors provide bridges to COM, though
application-specific bridging is easily achieved using any CORBA offering on
the Windows platform. Even Microsoft recently promised to provide a COM-CORBA
bridge.
Despite all of this, CORBA has still failed to produce a set of native application
servers. The problem with CORBA lies in its strengths. As CORBA developers focus
on interplatform, environment-neutral data communications, the technology is
all but painted into a corner when it comes to defining a component model.
By combining CORBA's distributed computing with Java's component model, one
can indeed obtain a simple distributed component model. Because of its simplicity,
and the fact that it is born of the synergy of two well-accepted technologies,
it is very widely used nowadays. Yet it suffers many limitations because it
breaks the neutrality promises of CORBA and lacks the completeness of EJB, which
in some ways seems to subsume it.
Transaction Services
The scalability requirements of many e-business systems -- mainly due to the
large numbers of clients that must be handled -- require that resources be reused,
pooled and maintained. Prior to the advent of objects, transaction monitors
handled such duties. The handling of the object life cycle now resides in the
realm of Object Monitoring Services.
Object monitoring -- All application servers provide some sort of object monitoring.
MTS was the first to implement the concept of object monitoring, while EJB application
servers now do so as prescribed by the EJB standard.
Roughly speaking, object monitoring consists of carefully giving out references
to objects and reusing objects across many clients according to well-defined
rules. This strategy should reduce the number of objects in the system and produce
savings in the costly object creation and destruction phases, which include
the loading and saving of an object's state from persistent storage.
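The reuse strategy described above can be sketched as a simple instance pool. All names here are invented for illustration; a real object monitor would bound the pool, expire idle instances, and tie `activate`/`passivate` to loading and saving persistent state.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A reusable worker object with the two life-cycle hooks an object
// monitor drives: activate (load state) and passivate (save state).
class PooledWorker {
    int activations = 0;
    void activate()  { activations++; }  // a real server would restore state here
    void passivate() { }                  // ...and persist state here
}

// Toy pool: hands out idle instances when possible, creating new
// objects only when the pool is empty, so creation/destruction
// costs are paid once rather than per client.
class ObjectPool {
    private final Deque<PooledWorker> idle = new ArrayDeque<>();
    int created = 0;

    synchronized PooledWorker acquire() {
        PooledWorker w = idle.poll();
        if (w == null) { w = new PooledWorker(); created++; }
        w.activate();
        return w;
    }

    synchronized void release(PooledWorker w) {
        w.passivate();
        idle.push(w);
    }
}
```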
Transaction management -- Most enterprise systems need to reliably store and
transfer data. Such systems usually rely on one or more database engines for
storage, and remote method invocation or message queuing for data transfers.
Products providing such functionality are extremely reliable, but reliability
and consistency break down when the functionality of the products is glued together
by business logic.
A common example is when enterprise data needs to be updated in a consistent
fashion across multiple databases. At fulfillment time, a merchant must simultaneously
update both the inventory and order database. When the order is shipped somewhere
in the system, a piece of business logic decreases the number of available units
by one, marks the order as fulfilled and inserts the tracking data. To avoid
introducing data consistency issues, these updates must be managed as a single
transaction.
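The "single transaction" requirement across multiple stores is usually met with a two-phase commit protocol: a coordinator asks every resource to prepare, and commits only if all vote yes. The sketch below is a minimal illustration of that idea with an invented `Resource` interface; real application servers delegate this to XA-capable resource managers and a transaction manager.

```java
import java.util.List;

// Invented interface standing in for an XA-style resource manager.
interface Resource {
    boolean prepare();   // vote: can this update be made durable?
    void commit();
    void rollback();
}

class Coordinator {
    // Phase 1: collect votes. Phase 2: commit everywhere, or roll
    // back everywhere if any resource voted no.
    static boolean runTransaction(List<Resource> resources) {
        for (Resource r : resources) {
            if (!r.prepare()) {                  // any "no" vote aborts all updates
                resources.forEach(Resource::rollback);
                return false;
            }
        }
        resources.forEach(Resource::commit);     // all voted yes: commit everywhere
        return true;
    }
}
```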
Not nearly enough organizations around the world use transaction managers
to keep their data consistent. Companies often rely on offline reconciliation
of updates, executed by an overnight batch process or another special-purpose
mechanism involving logging and manual interventions. However, all of these
ingenious mechanisms are not only difficult to validate, they are unable to
cope with the stringent online requirements of e-business.
As organizations begin to automate operations and incorporate the Internet
into internal processes, expect to see less ad hoc transaction processing and
more reliance on well-defined distributed transaction semantics and, of course,
transaction managers.
Application server vendors recognize the need for transaction management,
and business logic containers ideally fill this need. Sun, IBM and others are
moving quickly to incorporate transaction management into EJB computing platforms.
Microsoft has also moved swiftly to incorporate its transaction management into
the Windows NT offering.
Security Services
Also common to all distributed enterprise systems is the need to ensure that valuable
data is accessible only to authorized users. A number of distributed security
models have been developed in the past decade with basic characteristics such
as authentication and authorization. Authentication refers to the presence of
an authentication protocol that identifies the requesting party, while authorization
grants access only if the requestor's identity is included in a specific list
(the access control list) or if the requestor can assume a specific role (role-based
authorization).
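The two checks just defined can be sketched side by side. The `SecurityService` class, its data and its method names are invented for illustration; a real infrastructure would hash credentials and pull roles from a directory rather than an in-memory map.

```java
import java.util.Map;
import java.util.Set;

class SecurityService {
    private final Map<String, String> passwords;   // user -> password
    private final Map<String, Set<String>> roles;  // user -> granted roles

    SecurityService(Map<String, String> passwords, Map<String, Set<String>> roles) {
        this.passwords = passwords;
        this.roles = roles;
    }

    // Authentication: identify the requesting party.
    boolean authenticate(String user, String password) {
        return password.equals(passwords.get(user));
    }

    // Role-based authorization: grant access only if the requestor
    // can assume the required role.
    boolean authorize(String user, String requiredRole) {
        return roles.getOrDefault(user, Set.of()).contains(requiredRole);
    }
}
```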
Infrastructures based on such models have been widely deployed for user authentication
for operating system and network access. The most widely available is the all-pervasive
NT domain security. The Distributed Computing Environment (DCE) provides an
alternative infrastructure. These infrastructures were built for enterprise
intranets, but have proven inadequate for Internet-based systems. Specifically,
e-business applications deal with a user base much larger
and more dynamic than any organization's employee population. Such applications
may also have to handle communication across enterprises and, in general, across
entities not controlled by any single security authority.
E-business systems initially took on the task of building their own security
infrastructures, in most cases with an application-specific slant. But there
is an obvious need to have these security infrastructures reliable and reusable
across different applications, as well as manageable in a uniform manner. Application
servers have stepped up to the plate to provide such capabilities, providing
Declarative Security Services that are relatively easy to use and Programmatic
Services that can at the same time grow to support the specific needs of an
application.
Declarative Services are based on the fact that application servers are aware
of the component attributes of the business logic (activation, interfaces and
methods) and can secure them without programming. A single user interface can
administer both the security of the application server and the security of all
business components. At times, declarative security is not the right fit and
application-specific logic can better handle access control.
Application servers provide programmatic interfaces to let business logic
take over security. Consider the case of a system that handles bank accounts.
The application logic can verify that the initiator of a transaction corresponds
to the account owner before allowing access. In this case, the application-specific
control is obviously simpler than anything based on components or interfaces.
However, it does require that the application server make the identity of the
transaction initiator available through programmatic interfaces.
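The bank-account example can be sketched as follows. The `Account` class is invented for illustration; the `callerPrincipal` argument stands in for the identity a real server would expose programmatically (for instance, the EJB-style `getCallerPrincipal` call), and the business logic itself makes the access decision instead of a declarative rule.

```java
class Account {
    final String owner;
    private long balanceCents;

    Account(String owner, long balanceCents) {
        this.owner = owner;
        this.balanceCents = balanceCents;
    }

    // Programmatic security: the application logic compares the
    // caller's identity (supplied by the container in a real
    // server) to the account owner before allowing access.
    long withdraw(String callerPrincipal, long amountCents) {
        if (!owner.equals(callerPrincipal)) {
            throw new SecurityException("caller is not the account owner");
        }
        balanceCents -= amountCents;
        return balanceCents;
    }
}
```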
Integration Services
Distributed enterprise applications rarely stand by themselves. Rather they
usually reach inside a company's data vault and other internal business support
systems, such as mainframe applications running under CICS, Enterprise Resource
Planning (ERP) packages, BEA Tuxedo-based transaction systems, stored procedure
packages on an SQL DBMS, or CORBA-based distributed services. While specific
hooks or communication layers can be developed to attach to these systems, the
task may be quite complex (take the case of transferring a transaction context
from an EJB into an OTS-based CORBA system).
The baseline for all application servers is support for communication with
SQL database systems. Added value can include connection pooling services. Vendors
are currently scrambling to provide other integration services to differentiate
their application server products. As noted, prospective application server users
should scrutinize such features. Expect to see more architectural standardization,
as it is in the interest of application server vendors to allow easy access
to the platform by third parties. For example, Sun has defined the Connector
architecture for the Java 2 Enterprise Edition platform, which includes a reference
implementation of an EJB environment. The jury is still out on that move, but expect reaction
within a year.
Deployment Services
Deployment Services are used to deploy applications, and include Directory
Services and Availability Services such as load-balancing and fail-over.
Directory Services are used by components to find other components and can
also be used by clients to find servers. There are a number of standard interfaces
that programmers can use to register or find a component. The most widely available
are the Java Naming and Directory Interface (JNDI), the Active Directory Service
Interface and the CORBA Naming Service. The first two interfaces are usually
a layer over the universally accepted Lightweight Directory Access Protocol
(LDAP), which can talk to most available directory products. For fast access
and ease of deployment, some application servers also provide an internal directory
implementation.
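The bind-and-lookup role a Directory Service plays can be sketched with an in-memory registry. `NamingRegistry` is an invented stand-in: with a real server this would go through JNDI (`javax.naming.InitialContext`'s `bind` and `lookup`) backed by an LDAP directory or the server's internal one.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy naming service: components register references under a name,
// and clients resolve the name to obtain the reference.
class NamingRegistry {
    private final Map<String, Object> bindings = new ConcurrentHashMap<>();

    void bind(String name, Object ref) { bindings.put(name, ref); }

    Object lookup(String name) {
        Object ref = bindings.get(name);
        if (ref == null) throw new IllegalStateException("name not bound: " + name);
        return ref;
    }
}
```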
Availability Services allow deployment to be quickly adapted to a growing
number of users while maintaining the quality of service within a desired parameter.
This is a vital requirement for e-business applications, whose success is often
measured by the number of users they serve, and which are rarely positioned to
control their user base. The simplest load-balancing and fail-over mechanism relies
on the Internet Domain Name System (DNS) to randomly distribute users among
servers. This is what Web farms often do. A slightly more sophisticated approach
relies on hardware communication re-directors.
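The simplest scheme above -- rotating clients across a fixed set of servers, much as DNS rotation does for Web farms -- reduces to a round-robin selector. The class and server names are invented for illustration.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Round-robin load balancing: each new request is sent to the next
// server in the list, wrapping around at the end.
class RoundRobinBalancer {
    private final List<String> servers;
    private final AtomicInteger next = new AtomicInteger(0);

    RoundRobinBalancer(List<String> servers) { this.servers = servers; }

    String pick() {
        int i = Math.floorMod(next.getAndIncrement(), servers.size());
        return servers.get(i);
    }
}
```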
To provide better load-balancing and fail-over services some application servers
use the Directory Service, advanced programming models (special-purpose stubs
and handling of component state) and out-of-band communication. In this area,
application servers differ wildly and care must be taken to fully understand
the limits. For instance, transparent fail-over is often qualified by restrictive
conditions on the business logic, as an all-purpose fail-over mechanism cripples
performance.
The programming model required by an application server for specific Availability
Services can also impact the application architecture. It is important that
users investigate such models at the very beginning of a project or the whole
application can become trapped in a scalability corner. The description of the
services provided by application servers is useful in structuring the product
selection.
Choosing your application server
The application server to be used within an organization must be carefully
selected so as to justify the financial investment and maximize the resulting
business benefit. The first element to consider, as it is strongly tied to the
organization's available staff resources, is the programming model used to build
the application logic. Choose an EJB-based application server and, if your development
team is mostly versed in COM and Visual Basic, you will find yourself having
to deal with a fairly steep learning curve. The advantage of using an application
server is that it allows development resources to focus on the core business
logic; it therefore makes sense to choose an application server whose programming
model is similar to the model used within the organization. Outsourcing expertise
offers partial relief, but it makes sense only if it is considered in conjunction
with a knowledge transfer effort.
The next element to consider is the specific need to communicate with other
internal systems, be they databases, ERP systems or special-purpose systems.
Finding an application server with the appropriate connectors will greatly facilitate
integrating new business logic with existing systems and processes. For instance,
if there is a need to connect to Tuxedo-based transaction systems, BEA's WebLogic
offering allows as seamless a service as you can get. Should your business functionality
require extending transactions to the mainframe, IBM's WebSphere Enterprise
Server would make the short list. In general, an organization's connectivity
requirements typically define the short list of servers.
It is also important that the selection be done with the enterprise in mind.
As much as a company would like to use different application servers according
to project needs, application servers are complex products with distinct learning
curves for development, deployment and administration. For example, demarcating
transactions and securing the system are among the most complex issues at every
stage.
In addition, application servers must be evaluated for reliance on accepted
component models with support for object monitoring. The most widely available
ones are currently the aforementioned EJB and COM/MTS models. Application
servers with a unique model for encapsulating business logic are either hindered
from providing sophisticated services (scalability, fault-tolerance and the
like) or must evolve further, forcing the resulting learning curve on application
developers.
Finally, application servers are used for their ability to scale and, in general,
to provide high availability. Performance ratings, load-balancing and clustering
features are important when trying to manage the explosive and often unexpected
growth associated with e-business applications. Should the application server
not be able to support such growth, the organization faces the burden of rearranging
functionality in an attempt to scale the application.
Evolution and future
Some organizations have spent considerable resources building in-house application
frameworks to find that the burden of maintenance is more costly than expected.
These frameworks often provide only a small portion of the services required
by in-house systems, and companies may find themselves having to look for application-specific
alternatives or invest in more resources to extend the framework.
Others have bought and adopted commercial application frameworks from suppliers
like Forté Software. Because different commercial application frameworks
have no common infrastructure and no common model for the building of business
logic, these frameworks are often viewed as constrictive commitments to software
companies with an unclear future. A firm that buys a commercial application
framework is committed to building its business logic according to the framework's
rules. If the framework's vendor were to go out of business, the company would
find itself having to rebuild its business logic according to different rules.
With the advent of the Internet, Web servers were used as e-business application
platforms. While Web servers have proven to be excellent platforms to serve
static data, they have very little support for the building of business logic.
Applications were built using unscalable CGI platforms or they were built against
a Web server's proprietary API. In either case, high-level services were not
available, and organizations once again had to choose between building their
own framework or purchasing one and committing themselves to it.
For the first time, application servers offer a framework and a number of
high-level services that let an organization build business logic without fear
of lock-in, while also limiting the risk of software obsolescence. This is only
possible because the underlying component systems and interfaces are standardized.
There is also room for the development of a lively vertical component market
that would help organizations to concentrate on building strategic business
components; in other words, the logic that differentiates an organization from
its competitors. Creating an e-business system could then be as easy as combining
such strategic components with third-party components.