Columns

Development menus need more than Beans

How should I/S organizations select their application development tools? Today's politically correct answer is a tool suite that is enterprise-class, Web-enabled, COM/DCOM- and/or Corba-compliant, and will soon support Enterprise JavaBeans (EJB). Yet are these facilities enough to support today's application development requirements? If the answer were yes, this would be a very short column.

SPG Analyst Services believes that the tool selection process needs to take two dimensions into account: the complexity of the application under development, and the development support facilities needed to effectively design, build and maintain that application (see Fig. 1). Within the context of these two dimensions, the first challenge for I/S organizations is to peel back the layers of promotional and marketing hype and uncover the core capabilities of the products under evaluation. Only then can the real exercise begin -- determining if those capabilities meet identified needs.

Given an analyst's overwhelming need to categorize and compartmentalize, Fig. 1 introduces the concept of stages. A "stage" represents both an application class and the ideal tool suite for developing that class of applications. Stage 1 applications are relatively simple in domain scope and complexity, while Stage 2 applications represent midrange functionality. Stage 3 applications sit at the high-end of the scope/complexity scale.

In and of themselves, stages are neither inherently good nor bad. The bulk of early 'Net-enabled applications entail Stage 1 complexity. Moreover, responses to SPG Analyst Services' annual research survey reveal that the majority of I/S organizations typically implement Stage 2 applications. We believe that issues arise only when there is a mismatch between an application stage and a tool suite stage. If, for example, a company selects a Stage "n" tool suite for a Stage "n+1" application, the tool will not have the horsepower to implement that application efficiently and effectively. However, we use the terms efficiently and effectively advisedly. If enough bodies are thrown at a software problem, a solution can usually be manufactured.

Whether that solution is the best use of those resources is another matter altogether. Another mismatch can occur if a Stage "n+1" tool is selected for a Stage "n" application. Having room for growth is certainly not a bad thing. However, you don't need a backhoe to dig a hole for a rose bush.

As major paradigm shifts occur, the nature of applications (and the toolsets for developing those applications) tends to restart at Stage 1 and, over time, migrate to Stages 2 and 3. With the advent of client/server, for example, many early applications were elementary Stage 1 applications. As the I/S community gained experience with the client/server model and the tools matured, the more complex Stage 2 applications became prevalent. By late 1996, and continuing into the present day, both I/S and tool maturity have progressed to the point where complex Stage 3 applications are commonplace.

Today, we are in another paradigm shift. The 'Net and component-based development have reset the meter, and companies and tool vendors alike are moving into another adoption cycle. The sections below present a subset of what we believe are the major application/tool qualifiers that must be addressed as we move into this cycle.

App complexity factors

TRANSACTION TYPE -- In a very short time, 'Net-enabled applications have played out a microcosm of transaction types (and their natural evolutionary path). At first, the primary function of these applications was Stage 1 data access -- most commonly generating HTML pages, but increasingly including data drawn from a system data store. From data access, I/S organizations moved to the collection of transactional data for subsequent offline batch updates (Stage 2). Today, with the increasing appetite for state-of-the-business information and round-the-clock operation (and the resulting decrease in available batch processing time), the demand for Stage 3 real-time updates -- and the tool suites that support such updates -- is rising.
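
To make the three types concrete, here is a minimal Java sketch of the same order transaction handled each way. The OrderStore interface and its methods are hypothetical stand-ins for whatever data store a given tool suite fronts.

```java
import java.util.ArrayList;
import java.util.List;

public class OrderHandler {
    private final OrderStore store;               // hypothetical data-store wrapper
    private final List pending = new ArrayList(); // Stage 2 batch buffer

    public OrderHandler(OrderStore store) { this.store = store; }

    // Stage 1: read-only data access, e.g. to populate an HTML page.
    public String lookup(String orderId) {
        return store.read(orderId);
    }

    // Stage 2: collect the transaction now; an offline batch job applies it later.
    public synchronized void collect(String order) {
        pending.add(order);
    }

    public synchronized void applyBatch() {       // run offline, e.g. nightly
        for (int i = 0; i < pending.size(); i++) {
            store.update((String) pending.get(i));
        }
        pending.clear();
    }

    // Stage 3: real-time update -- the data store changes immediately.
    public void updateNow(String order) {
        store.update(order);
    }
}

interface OrderStore {                            // hypothetical
    String read(String orderId);
    void update(String order);
}
```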

TRANSACTION TOPOGRAPHY -- Typical Stage 1 topography deals with a single data store, usually a relational database. Within this context, concurrency and synchronization of multi-user access is handled easily by the database software. With Stage 2 topography, the data store issue extends to multiple but still homogeneous sources. As in Stage 1, modern-day database software (relational or otherwise) inherently accommodates multiple users across multiple databases. With Stage 3 topography, the venue broadens to include multiple, heterogeneous data stores. Note that we use the term data store in its broadest sense. A data store can be a relational, object-relational or pure object database. However, it can also be non-traditional, such as an enterprise resource planning (ERP) application whose activities must be included in a given transaction.
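
For the Stage 3 case, the sketch below shows what coordinating a single transaction across two heterogeneous stores can look like in Java, assuming a JTA-capable application server. The JNDI names and table names are hypothetical; the server's transaction manager coordinates the two-phase commit across both resources.

```java
import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

public class CrossStoreUpdate {
    public void markShipped(String orderId) throws Exception {
        InitialContext ctx = new InitialContext();
        UserTransaction tx =
            (UserTransaction) ctx.lookup("java:comp/UserTransaction");
        DataSource orders = (DataSource) ctx.lookup("jdbc/orders"); // hypothetical
        DataSource erp    = (DataSource) ctx.lookup("jdbc/erp");    // hypothetical

        tx.begin();
        Connection c1 = null;
        Connection c2 = null;
        try {
            c1 = orders.getConnection();
            c2 = erp.getConnection();
            c1.createStatement().executeUpdate(
                "UPDATE orders SET status = 'SHIPPED' WHERE id = '" + orderId + "'");
            c2.createStatement().executeUpdate(
                "UPDATE erp_shipments SET sent = 1 WHERE order_id = '" + orderId + "'");
            tx.commit();    // both stores commit, or neither does
        } catch (Exception e) {
            tx.rollback();  // undo the work in both stores
            throw e;
        } finally {
            if (c1 != null) c1.close();
            if (c2 != null) c2.close();
        }
    }
}
```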

TRANSACTION COMPLEXITY -- This metric is a measure of business rule complexity -- that is, the actions to be performed within the context of a given transaction. In Stage 1, the actions may be only a validation of the data format, such as numeric, alphabetic and so on. The actions may also be domain specific -- validating that the data is a bona fide customer name or number. Here, we would look for tools to provide a totally automated facility for validating data formats, as well as clean, easily managed exception handling for domain-specific errors. Increasing application complexity translates into increasing business rule complexity. Stage 2 involves sequences, such as master-detail processing, and we would look for tool suites that provide a framework for these frequently used processing models. Stage 3 transactions entail intricate and company-specific sequencing. Here, I/S organizations must evaluate the language used to specify those rules, as well as the mechanisms for defining the sequencing. If it takes 12 months to specify, create and/or modify those rules, for example, any competitive advantage may well be lost.
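
As a simple illustration of the Stage 1 end of this scale, the following Java sketch pairs an automated format check with clean exception handling for a domain-specific rule. CustomerDirectory and ValidationException are hypothetical.

```java
public class CustomerValidator {
    private final CustomerDirectory directory;   // hypothetical lookup service

    public CustomerValidator(CustomerDirectory directory) {
        this.directory = directory;
    }

    public void validate(String customerNumber) throws ValidationException {
        // Format rule: the field must be non-empty and all digits.
        if (customerNumber.length() == 0) {
            throw new ValidationException("Customer number is required");
        }
        for (int i = 0; i < customerNumber.length(); i++) {
            if (!Character.isDigit(customerNumber.charAt(i))) {
                throw new ValidationException("Customer number must be numeric");
            }
        }
        // Domain rule: the number must identify a bona fide customer.
        if (!directory.exists(customerNumber)) {
            throw new ValidationException("No such customer: " + customerNumber);
        }
    }
}

interface CustomerDirectory { boolean exists(String customerNumber); }

class ValidationException extends Exception {
    ValidationException(String msg) { super(msg); }
}
```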

TRANSACTION RATE AND NUMBER OF USERS -- Thanks to the hard lessons of the early client/server systems, most I/S organizations are attuned to this criterion. Unfortunately, there are no hard and fast guidelines (in our opinion) to differentiate Stage 1, 2 and 3 transaction rates and/or the number of users. A server that can easily handle a 100-user Stage 1 application could be brought to its knees under the load of a 10-user Stage 3 application with highly intricate business rules.

That said, we believe there are performance/scalability differentiators I/S groups should seek in the tools they evaluate. Stage 1 tools should support mechanisms for overlapped computation, which today means multithreading. Additionally, since the easiest method for increasing throughput is to upgrade the hardware system, we believe the tool should also support a range of increasingly powerful hardware platforms.
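
A minimal Java sketch of that overlapped computation follows: each incoming request is handled on its own thread, so slow work for one user does not block the others. The request handling itself is, of course, a placeholder.

```java
public class RequestDispatcher {
    public void dispatch(final String request) {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                handle(request);   // runs concurrently with other requests
            }
        });
        worker.start();
    }

    private void handle(String request) {
        // ... application work: parse the request, query the data store, respond ...
    }
}
```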

Stage 2 tools should add facilities for non-blocking (asynchronous) operations, as well as database connection pooling. For maximum performance/scalability optimization, Stage 3 tools should support a true n-tier architecture, automated partitioning facilities and load sharing.
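
Connection pooling, at least, is easy to picture. The sketch below shows the core of such a pool in Java; production-grade pools in real tool suites add connection validation, timeouts and growth policies, and the JDBC URL is hypothetical.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.LinkedList;

public class ConnectionPool {
    private final LinkedList free = new LinkedList();

    // Open a fixed number of connections up front; callers share them.
    public ConnectionPool(String url, int size) throws SQLException {
        for (int i = 0; i < size; i++) {
            free.add(DriverManager.getConnection(url));
        }
    }

    // Hand out an open connection, blocking until one is available.
    public synchronized Connection acquire() throws InterruptedException {
        while (free.isEmpty()) {
            wait();
        }
        return (Connection) free.removeFirst();
    }

    // Return a connection to the pool instead of closing it.
    public synchronized void release(Connection c) {
        free.add(c);
        notify();
    }
}
```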

SUPPORT FOR EXISTING COMPUTING ASSETS -- While technology continues to evolve at a whirlwind pace, I/S organizations cannot "turn on a dime" due to the millions of dollars invested in existing systems and infrastructure. Stage 1 tools generally support a newer and typically homogeneous platform set. With the rush of Web-based technology, several of today's tool suites are, in fact, 'Net-only. Stage 2 tools broaden platform support to a heterogeneous set of platforms that encompass the more popular systems of the day. Stage 3 tools provide the broadest platform support. Stage 2 and Stage 3 tool suites also allow a phased migration to the newer technologies.

INTEGRATION AND INTEROPERABILITY -- Research shows that this aspect of application development is becoming more and more critical to I/S groups. With the move to component-based development, integration and interoperability play major roles even in self-contained Stage 1 applications. While object models and their implied middleware connectivity capabilities garner top press, these are not the only intra-application connectivity facilities in use. As organizations develop more complex applications, inter-application issues, such as mainframe connectivity and ERP application connectivity, arise. In some cases (Stage 2), the target application will always be "listening" and ready to accept data from the sending application. However, in other cases (Stage 3), the target application may be active or dormant. Because the data must be queued in the latter instance, the sending application will not know for an indeterminate time whether the data was: a) successfully received, and b) successfully processed.
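
That Stage 3 queued case can be sketched with the standard JMS point-to-point API, as below. The JNDI names are hypothetical, and the key point is that send() returns once the message is queued, not once the possibly dormant target has processed it.

```java
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class OrderForwarder {
    public void send(String orderData) throws Exception {
        InitialContext ctx = new InitialContext();
        QueueConnectionFactory factory =
            (QueueConnectionFactory) ctx.lookup("QueueConnectionFactory"); // hypothetical
        Queue queue = (Queue) ctx.lookup("queue/orders");                  // hypothetical

        QueueConnection conn = factory.createQueueConnection();
        try {
            QueueSession session =
                conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueSender sender = session.createSender(queue);
            TextMessage msg = session.createTextMessage(orderData);
            sender.send(msg);   // returns once queued, not once processed
        } finally {
            conn.close();
        }
    }
}
```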

AVAILABILITY -- This criterion measures the business impact of system downtime. In Stage 1, downtime has minimal business impact, although the aggravation factor for users may be quite high. Classic backup/restore procedures suffice as a recovery plan. Stage 2 systems are moderately sensitive to downtime and will have a more elaborate plan to recover from a failure, such as local transaction logging with automated recovery. Stage 3 applications are the most sensitive to downtime and frequently have 24x7x52 availability requirements. Here, we see the need for server failover, full-blown disaster recovery programs and support for high-availability systems, such as clusters and local/remote shadow (redundant) systems.
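
At its simplest, client-side failover to a shadow system can look like the following Java sketch; AppServer and its execute() method are hypothetical, and real failover schemes must also address in-flight transactions and resynchronization.

```java
public class FailoverClient {
    private final AppServer primary;
    private final AppServer shadow;   // local or remote redundant system

    public FailoverClient(AppServer primary, AppServer shadow) {
        this.primary = primary;
        this.shadow = shadow;
    }

    public String execute(String request) throws Exception {
        try {
            return primary.execute(request);
        } catch (Exception primaryDown) {
            // Primary unreachable; fail over to the shadow system.
            return shadow.execute(request);
        }
    }
}

interface AppServer { String execute(String request) throws Exception; }
```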

The above are a subset of the application complexity factors that ultimately determine the most appropriate development tool or tools. However, application complexity is not the only differentiator in the development tool market.

Life-cycle management/support

As application complexity increases, so too does the depth and breadth of the infrastructure needed to support the development effort. For example, the functionality needed to develop and maintain a Stage 1 application is a very small subset of the functionality needed to create and maintain a large, geographically distributed information system.

We have long felt that the initial development aspect of the life cycle receives an inordinate amount of attention. Yes, this aspect is critical, and the resulting applications can often put a company in a very competitive market position. Nevertheless, if that application is a bear to modify in light of changing market conditions, that competitive advantage will soon evaporate.

The number of potential tool types involved in the development of serious applications is significant. Very few tool vendors have the resources to "do it all." That said, the need for integrated life-cycle tool support has not been lost on the vendor community. Companies such as Platinum Technology and Rational Software are making their play in the infrastructure market. Rather than focus on the development tool per se, these firms are targeting the auxiliary tool suites supporting distributed systems development. Additionally, many of the development tool vendors currently augment any homegrown capabilities with a strong life-cycle alliance program. Finally, there are companies, such as Compuware and Computer Associates, that have the resources and the aggressive acquisition strategy to do it all.

Tool selection is not easy. Whatever the choice, I/S organizations seem to be consistent in one aspect -- the value of a proof-of-concept. While limited in scope and investment, the use of real data with real processing models and issues can either jump-start a project or prove that the wrong set of tools was chosen for the task.