In-Depth
4GLs gear up for full-cycle development
- By Sandra Taylor
- July 12, 2001
Once upon a time, 4GLs were primarily tools for constructing distributed applications. The 4GL tool came
with a mandatory GUI painter, a high-level language for specifying logic, a debugging facility, and a runtime infrastructure
to connect and coordinate the various pieces of the deployed application. Powersoft Inc., Burlington, Mass., built
its vast empire on just those capabilities.
Subsequently, companies like Forté Software Inc., Oakland, Calif., pushed into the realm of object-oriented
programming and application partitioning. Still, the core of the 4GL tool was centered on the development portion
of the application life cycle. But those were the early days, and we are not in Kansas anymore, Toto. Today's 4GL
landscape is increasingly popular and increasingly rich -- not only in terms of high-productivity development features
and functions, but more importantly, in the growing ability of these environments to support (directly or indirectly)
the full application life cycle. Increasingly, development is only one step in a larger process.
Press, analyst, and even corporate development groups can become fixated on the latest "hot" technology.
In the early half of this decade, the rage was rapid application development (RAD). Today it is the Internet, object/component
technology, and Java. While hot technology is a viable focus for the press and analyst communities, focusing exclusively
(or even predominantly) on one aspect of the application life cycle is courting disaster for organizations in the
process of developing new applications and selecting the tool suites used for that development. The more critical
and potentially long-lived the application being developed, the more essential it is to look at the total picture.
With this application development growth comes the hard reality that a GUI/RAD development tool is not enough.
The systems and processes are too complex. Better to think about these issues during the tool selection process
than be blind-sided once development is underway or after the application has gone into production.
The new check list
Nearly every organization in the midst of tool selection has its feature/function check-off list. While it's
hard to capture the essence of a tool suite in such lists, they do provide a consistent way of evaluating all the
tools under consideration. Given the evolving state of 4GL environments, there may be some new categories you want
to add and, perhaps, a new perspective on these tool suites. SPG suggests that developers examine this new perspective
mindful of the life cycle view. The phases are straightforward.
PHASE 1 -- ANALYZE AND PLAN. In this phase, the team defines system requirements
and develops implementation plans. This phase is essentially labor intensive in terms of specifying exactly what
the system will do, obtaining the buy-in of parties with vested interests, and developing an implementation plan.
In the days of RAD, this phase was often cut short, or nearly eliminated altogether, in favor of on-the-screen
application definition. Today, with the ease and speed with which virtually any computer-literate department can
develop Web pages, more than a few organizations are finding history repeating itself -- but that is a story unto
itself.
Once requirements are defined, there are software packages that help manage the status of these requirements,
in other words, record the initial requirements and provide a formal methodology for managing changes. While today,
we know of no Requirements Management packages that have a direct link with development tools per se, we are beginning
to see the integration of these infrastructure tool suites. Platinum Technology Inc., Oakbrook Terrace, Calif.,
is well on its way. And with Rational Software Inc.'s, Santa Clara, Calif., numerous acquisitions, the direction
of that company is clear. Integration is also proceeding across company boundaries. Atlanta-based Technology Builders
Inc. has announced a Requirements Management package that integrates with Mercury Interactive's test management
software.
Another class of software -- project management -- provides the framework for monitoring implementation plans.
Over 60% of the respondents to SPG's annual survey of Fortune 1000 IT organizations currently use a project management
tool. [For more on project management software integration, see "When is project management like driving to
New York City?" this issue.]
PHASE 2 -- MODEL. DESIGN APPLICATION AND DATA USING PICTURES. While vendors with
modeling facilities cringe at the term "pictures," this is essentially what modern-day modeling facilities
offer -- flow control, dependencies, and the like using a graphic lexicon. SPG research shows an increasing use
of modeling facilities, not only for the complex, high-end applications, but for the less complex, mid-range applications
as well. This is largely due to the inherent understandability and consistency of a visual metaphor. The process
of visually identifying the pieces of the application puzzle will clarify issues that might not otherwise surface
until later in the project.
The second factor is the interactive nature of modern modeling facilities. While the term "modeling"
was suspect in the days of CASE, modern-day process modeling tool suites have come a long way. Unending waterfall
development models and a design-everything-before-coding-starts mentality are becoming things of the past. Today's
suites are much more suited to the visually-oriented, iterative development models.
Some tools, like Oracle Corp.'s, Redwood Shores, Calif., Designer/2000, carry the design through to code generation.
Others, like Rational's Rose, generate a component specification and the interface definition, and leave the actual
coding of the method to the developer. And, increasingly common is the ability to provide "round trip"
engineering -- generate from the graphical model, allow developers to hand-code changes using the chosen development
tool, then incorporate those hand-coded changes back into the graphical model.
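To make the round-trip idea concrete, consider a minimal sketch of what generated code with protected regions might look like. The class name, attribute, and marker comments here are our illustrative inventions, not any particular vendor's output.

    // Illustrative sketch of a round-trip code-generation pattern.
    // Names and marker comments are hypothetical, not a vendor's output.
    public class CustomerService {   // generated from a Customer class in the model

        private String name;          // generated attribute

        // Generated accessor; regenerating from the model rewrites
        // everything outside the protected regions.
        public String getName() {
            return name;
        }

        public void setName(String name) {
            // begin-user-code -- hand-written logic preserved on regeneration
            this.name = name.trim();
            // end-user-code
        }
    }

When the developer changes the model, the generator rewrites the skeleton but carries the marked hand-coded region forward; when the developer changes the code, the tool folds the change back into the model.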
One interesting twist to modern modeling is for IT organizations to first prototype an application in a simple,
low-cost language, like Visual Basic, then move to a more powerful (and typically more expensive) 4GL for production
if the prototype is successful. Given the wide support for VB among the modeling tools, it is conceivable to take
a model used to generate a VB application and re-tool the output for a 4GL environment. Add the increasing popularity
of the Rational-initiated Unified Modeling Language (UML), and we see the trend toward a commonly-accepted modeling
lexicon that can be used for prototyping and then enhanced if and as the full-scale development effort occurs.
This process is not a panacea -- the technologies are still evolving. However, the direction is clear and should
at least be considered by IT organizations in the process of developing new mid to high-end applications.
Another aspect of modeling is data modeling. Where process modeling fell from favor following the CASE heydays,
data modeling has always maintained a strong following. From a graphical representation, the data modeling tool
essentially "defines" the database, in other words, the schema, the DDL, and so on. Today's 4GLs should
have the ability to access those definitions and automatically generate data-aware objects for use in the application.
Note that most data modelers can also reverse engineer (in other words, generate the graphic representation of)
an existing database.
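As a rough sketch of the mechanics behind reverse engineering, the standard JDBC metadata calls are enough to walk an existing schema; a real data modeler does this far more thoroughly and draws the diagram besides. The connection URL below is a placeholder for whatever database is in use.

    import java.sql.*;

    // Sketch: walking an existing schema the way a reverse-engineering
    // step might. The JDBC URL is a placeholder.
    public class SchemaWalker {
        public static void main(String[] args) throws SQLException {
            Connection con = DriverManager.getConnection("jdbc:odbc:payroll");
            DatabaseMetaData meta = con.getMetaData();
            ResultSet tables = meta.getTables(null, null, "%", new String[] {"TABLE"});
            while (tables.next()) {
                String table = tables.getString("TABLE_NAME");
                System.out.println("Table: " + table);
                ResultSet cols = meta.getColumns(null, null, table, "%");
                while (cols.next()) {
                    System.out.println("  " + cols.getString("COLUMN_NAME")
                            + " " + cols.getString("TYPE_NAME"));
                }
                cols.close();
            }
            tables.close();
            con.close();
        }
    }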
PHASE 3 -- DEVELOP. THE CLASSICAL FOCUS OF THE 4GL TOOL VENDORS. The 4GL tool vendors
have invested heavily in the hot new technologies, and at this point, they all have Java, Internet, and object/component
strategies. Some are further along the development curve than others. However, by this time next year, SPG Analyst
Services believes virtually all vendors will be delivering products that support these technologies.
What is more interesting is how they are supporting these technologies. For example, a few years ago, object-oriented
(OO) software burst on the scene with encapsulation, inheritance, and polymorphism and the promise of high productivity
and reuse. Most of us mortals could grasp the concept of encapsulation. Some of us could even theoretically understand
inheritance and how it might be applied in our applications; however, we were not willing to bet our mission-critical
systems on the fact that we understood it perfectly. As for polymorphism, we are sure there is someone who understands
the concept well enough to use it, but those "someones" are few and far between. Thus, the concept of
pure OO software ebbed in favor of what we now call components -- essentially the encapsulation part of OO.
Well, it turns out there are people who understand OO, and they are actually putting it to use in a way that
benefits even the most mortal among us. Consider IBM's VisualAge Generator and its GUI facilities. IT developers
see the ability to easily define base-level properties and a look-and-feel for all the screens in an application.
Once defined, those properties are automatically propagated through the remaining screens in the application. Under
the covers, it is the OO inheritance facilities of Smalltalk that carry those properties throughout the application.
IBM and several other 4GL vendors are taking advantage of OO technology to provide a high-productivity development
environment, while hiding the technology's inherent complexity.
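The idea can be sketched in a few lines of Java; the class names are ours for illustration, not VisualAge-generated Smalltalk. Define the look-and-feel once in a base class, and every screen that extends it picks up the properties automatically.

    // Sketch of inheritance doing the propagation work.
    abstract class BaseScreen {
        // Base-level look-and-feel, defined once for the application.
        protected String fontName = "SansSerif";
        protected int backgroundColor = 0xC0C0C0;

        public String getFontName() { return fontName; }
    }

    // Every screen inherits the properties; changing BaseScreen
    // changes them all.
    class OrderEntryScreen extends BaseScreen {
        // Only screen-specific behavior lives here.
    }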
The other state-of-the-practice for 4GLs is the availability of application wizards that, with minimal definition,
will generate a complete CRUD (Create, Read, Update, Delete) application, including GUIs, simplistic business logic,
and database access. Now the availability of this facility is not altogether altruistic. For many IT organizations,
a high-end 4GL -- especially an OO4GL -- is a new and different animal. To help jump start the application development
process and give developers a chance to "see how it's done," vendors are providing products that comprise
a completely functional, albeit basic, data access application.
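What such a wizard's output amounts to can be suggested with a sketch, assuming a simple customer table; the table, column, and class names are invented for illustration.

    import java.sql.*;

    // Sketch of the kind of skeleton a CRUD wizard generates.
    class CustomerAccess {
        private final Connection con;

        CustomerAccess(Connection con) { this.con = con; }

        void create(int id, String name) throws SQLException {
            PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO customer (id, name) VALUES (?, ?)");
            ps.setInt(1, id);
            ps.setString(2, name);
            ps.executeUpdate();
            ps.close();
        }

        String read(int id) throws SQLException {
            PreparedStatement ps = con.prepareStatement(
                    "SELECT name FROM customer WHERE id = ?");
            ps.setInt(1, id);
            ResultSet rs = ps.executeQuery();
            String name = rs.next() ? rs.getString(1) : null;
            rs.close();
            ps.close();
            return name;
        }
        // update() and delete() follow the same pattern.
    }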
In an effort to differentiate themselves in the arena of high-productivity tools, some vendors have carried
application wizardry into the realm of frameworks (or templates). A framework is a structure that includes a significant
amount of base-level functionality and the interprocess communication between the various vendor-provided elements.
Some frameworks are "horizontal" and provide commonly-used general functionality, such as Dulles, Va.-based
Template Software's framework for workflow. Other frameworks are "vertical" in nature and provide industry-specific
functionality, such as Template's frameworks for the telecommunications industry or Cary, N.C.-based Seer Technologies
with its recently-announced strategy of providing financial services frameworks. The SPG Analyst Services division
anticipates that we will see more and more of this approach. Frameworks eliminate the need to code much of the
base functionality commonly found in an application. Yet they also provide the "hooks" for a development
group to customize the application and tailor it to the specific and competitive needs of the business -- something
that tends to be lacking in a full-blown packaged application.
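The hook mechanism itself can be illustrated with a small Java sketch; the class and method names are ours, not Template's or Seer's APIs. The framework fixes the overall flow and supplies the base functionality, leaving an abstract method for the business-specific piece.

    // Sketch of framework "hooks": the framework owns the flow,
    // the development group overrides only what is business-specific.
    abstract class OrderWorkflow {
        // Framework-provided skeleton: the overall flow is fixed.
        public final void process() {
            validate();
            route();
            settle();
        }

        // Base functionality supplied by the framework.
        protected void validate() { /* generic validation */ }
        protected void route()    { /* generic routing */ }

        // Hook: each business supplies its own settlement rules.
        protected abstract void settle();
    }

    class BrokerageWorkflow extends OrderWorkflow {
        protected void settle() {
            // firm-specific settlement logic goes here
        }
    }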
PHASE 4 -- TEST. VERIFY THE PROPER OPERATION OF THE APPLICATION. This particular
area is, in the opinion of SPG Analyst Services, currently one of the two most difficult phases of the distributed
application development process. While every vendor provides a native debugger for testing the application in the
single system development mode, the real issue is how to debug an application that spans application servers, database
servers, GUI clients, and now, Web-enabled clients. Some tools have discovered the advantage of agent technology
in, for example, round-trip tracking of a transaction. But generally, when a distributed system has a bug, it's
a major task to isolate the cause of the problem.
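One common technique behind such round-trip tracking is tagging every transaction with an identifier and logging it at each tier it crosses; correlating the logs then reconstructs the transaction's path. A bare-bones sketch, with invented class and tier names:

    // Bare-bones sketch of round-trip tracking: tag each transaction
    // and log it at every tier it crosses.
    class Tracker {
        private static int nextId = 0;

        static synchronized int newTransactionId() {
            return ++nextId;
        }

        static void trace(int txId, String tier, String event) {
            // A real system would send this to a collector, not System.err.
            System.err.println("tx=" + txId + " tier=" + tier + " " + event);
        }
    }

    class GuiClient {
        void submitOrder() {
            int tx = Tracker.newTransactionId();
            Tracker.trace(tx, "client", "submit");
            // ...call the application server, passing tx along...
            Tracker.trace(tx, "client", "reply received");
        }
    }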
Another issue with distributed applications is performance. This was the bane of early client/server systems.
As users and/or workload increased, these systems had the unpleasant tendency to hit the "scalability wall."
In the early days, there were unfortunately no tools to help those pioneers foresee that wall.
Now automated software testing tools can drive a distributed system with the goal of uncovering timing and accuracy
problems.
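At heart, such a tool drives the system with many simulated clients and times the responses. A stripped-down sketch, where callServer() is a stub standing in for a real transaction against the system under test:

    // Sketch of what a load-driving tool does at heart: many simulated
    // users, each timing its transactions.
    class LoadDriver implements Runnable {
        public void run() {
            for (int i = 0; i < 100; i++) {
                long start = System.currentTimeMillis();
                callServer();
                long elapsed = System.currentTimeMillis() - start;
                System.out.println(Thread.currentThread().getName()
                        + " response ms: " + elapsed);
            }
        }

        private void callServer() {
            // placeholder for a real transaction against the system under test
        }

        public static void main(String[] args) {
            for (int i = 0; i < 50; i++) {       // 50 concurrent simulated users
                new Thread(new LoadDriver(), "user-" + i).start();
            }
        }
    }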
We see increasing integration of these tool suites with the 4GLs -- Mercury Interactive Corp., Sunnyvale, Calif.,
has deals with Dynasty Software Inc., Redwood Shores, Calif., and Forté Software Inc., Oakland, Calif.;
Segue Software Inc., Newton, Mass., has deals with Forté and Oracle Corp.; and Rational's SQA test software group
has deals with Sybase subsidiary Powersoft, Concord, Mass.
PHASE 5 -- DEPLOY. MOVE THE APPLICATION ONTO THE RUNTIME SYSTEMS. In the early
client/server days, just when the IT teams thought their work was done, along came the issues of deployment (and
re-deployment in case of bug fixes and upgrades). A first concern was determining exactly which revisions of
the application modules were the latest. Then came the time, cost, and often manual effort involved in updating hundreds of clients.
Times have changed.
Now we find configuration management and version control products from Intersolv Inc., Rockville, Md., Platinum,
and others, as well as homegrown capabilities within the 4GLs themselves. Meanwhile, the Web and automated distribution
products from companies such as Marimba Inc., Palo Alto, Calif., Novadigm Inc., Mahwah, N.J., and IBM subsidiary
Tivoli Systems Inc., Austin, Texas, have solved many of the physical distribution problems. The question now is
whether your development environment interfaces with these products. In a growing number of cases the answer will
be yes.
PHASE 6 -- MANAGE. MONITOR AND CONTROL THE RUNTIME ENVIRONMENT. Management of a
distributed system takes many forms. There is network management, in other words, monitoring the traffic over the
network and looking for both bottlenecks and potential ways to enhance the movement of information. There is database
management, in other words, the ability to monitor and potentially alter the database properties.
And there is application management, in other words, the ability to monitor and control the application itself
-- start and stop partitions, check usage and/or error statistics, and dynamically re-partition an application.
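What such a facility amounts to can be suggested with a small interface sketch; this is illustrative only, not Forté's or Oracle's actual API.

    // Illustrative sketch of the control surface application
    // management implies.
    interface PartitionManager {
        void start(String partitionName);
        void stop(String partitionName);
        int activeUsers(String partitionName);
        int errorCount(String partitionName);

        // Move a partition to another node while the application runs.
        void repartition(String partitionName, String targetNode);
    }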
The mainframe world solved the management problem a long time ago. In the world of distributed computing, we
are just beginning to get a handle on the technologies that truly allow us to manage the varied aspects of a running
application environment. Some of the 4GL vendors like Forté and Oracle have figured out how important this
facility is and have implemented native management facilities.
Depending on the application and the environment, such facilities may meet the need. In complex environments
where more management structure is required, independents like Tivoli offer a more viable alternative, and in fact,
it is not uncommon to find the 4GL environments having links to these system management packages.
PHASE 7 -- MAINTAIN. INCORPORATE ON-GOING CHANGES/BUG-FIXES. Here we have the poor
country cousin of the application life cycle. Ironically, this phase of the cycle ultimately consumes the most
resources. While the year 2000 issue is an aberration, it does drive home the potential issues (and price tag)
surrounding maintenance.
Maintainability takes several forms. One aspect is how quickly an IT organization can respond to changing business
conditions with new applications and/or changes to existing business rules. By the very nature of a 4GL, such revisions
should be faster to implement than say, a 3GL. The other emerging trend -- the use of object/component technology
-- should also make the rules easier to locate and modify by virtue of the self-isolating nature of this technology.
At this point, the specifics of the 4GL take over, and not all 4GLs are created equal.
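A short sketch of the self-isolation point, with invented names: when a rule lives behind a single interface, changing it is a one-class edit rather than a hunt through the application.

    // Sketch of a business rule isolated in one component.
    interface DiscountRule {
        double discountFor(double orderTotal);
    }

    class StandardDiscount implements DiscountRule {
        public double discountFor(double orderTotal) {
            // The rule lives here and only here; changing the threshold
            // or the rate is a one-class edit.
            return orderTotal > 1000.0 ? orderTotal * 0.05 : 0.0;
        }
    }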
Vendors love to demo their GUI facilities. Well, guess what? By now, virtually all vendors have a modern and
viable GUI facility with point-and-click, drag-and-drop, etc., etc., etc. The issues lie in the facilities sitting
behind the GUI. What does the business logic language look like and how easy is it to use? Some are very high-level,
but sacrifice the ability to code lower-level functions. Others are more difficult to learn, but provide that lower-level
functionality. The point is that IT organizations need to look behind those GUIs and evaluate the nature of the
4GL in light of their development team's expertise and background. We also believe IT organizations need to "walk
through" the maintenance process with the vendors they're evaluating. What's involved in updating, testing,
and deploying user screens, application-server-based services, and database services?
Native versus alliances
To provide these life cycle capabilities, tool vendors are taking various paths. In providing modeling facilities,
companies like Oracle, Powersoft, and Compuware Corp., Farmington Hills, Mich., have chosen to control their own
destiny with native modeling facilities. For such companies, the native facility is not exclusive, and in the case
of modeling, there are other companies that provide modeling front ends. Meanwhile, companies like Dynasty and
Forté effectively say modeling is not part and parcel of their core competency and partner with one or more
modeling specialists.
The only general trend in terms of native versus alliance-derived life cycle support is that the larger the
company, the more native support it tends to offer. The caveats to the potential buyer are one, be sure to evaluate
the facilities of the native software (be it modeling, or system management, or even facilities such as middleware)
versus the facilities offered by best-of-breed vendors; and two, weigh the pros and cons of single-source versus
the strength and integration of multisource alliance-based technology.
Feature/function comparisons are still a mandatory part of the process of evaluating a 4GL tool suite. But,
a recent batch of new technologies (Internet, OO/component technology, Java) will, in the relatively near future,
become non-differentiators for the majority of these tools. The core capabilities of a 4GL will still be the major
factors that determine its strengths (and weaknesses). What is changing is the ability of these tools to address
the major issues surrounding the application life cycle and the rising awareness of the importance of these issues
in IT organizations. The 4GL tool market is crowded, and vendors are seeking ways to differentiate themselves.
Life cycle support is a crucial differentiator.