Open Source For A Closed World
Higher quality software is the result of a better development process, which may be enhanced by improving the quality of the people or the products used in the process. In the absence of a formal process for ensuring the correctness of code, the best solution is a better process for review, to discover problems, and for correction.
The fact that this has been known for at least three decades in no way mitigates its relevance today. Nearly 30 years ago, Jerry Weinberg—one of my favorite authors and the best at exposing the people issues inherent in technology problems—rose to prominence with the publication of The Psychology of Computer Programming (Van Nostrand Reinhold, New York: 1971). Perhaps the best-remembered contribution of that volume was the concept of "egoless programming," in which Weinberg revealed that the best software developers were those who shared code within a team to expose and correct errors (a practice he referred to as "open programming"). His discourse on the importance of peer reviews of complex systems remains valid today.
A few years later, Weinberg and my colleague Daniel Freedman published the Handbook of Walkthroughs, Inspections, and Technical Reviews (Dorset House Pub., New York: 1990), which remains the reference work on the topic.
Today, two worlds that took different approaches to software development decades ago are colliding. The first embraced the open approach and took it beyond the single corporate teams envisioned by Weinberg. In this world, communities organized themselves around projects (programs and systems) like GNU, Linux, Apache and Perl; principles such as software being free (see the Free Software Foundation, www.fsf.org/fsf/fsf.html); and collections of communities with similar interests like those represented on www.slashdot.org, which bills itself as "News for nerds. Stuff that matters." A common thread in this world is the idea that the source code should be available (open) to users to facilitate understanding, repair or enhancement, whether or not fees are charged for the software.
In the closed world, corporations are the basis for organization and source code is treated as a trade secret, not to be shared except perhaps with an escrow agent. While most commercial software firms adopted some aspects of the review process to test their code, the review was typically confined to a department assigned to find errors or to the maintenance team once a product reached the market (or was released, in the case of internally developed systems).
As the Linux community has demonstrated, however, more people reviewing the code means faster error detection, and a strong community means faster correction. Given the increasing complexity of the systems being developed and the relatively stable capacity of the people who are developing them, widespread adoption of open source methods is a virtual certainty. Separating the detection from the correction by using lots of beta sites with centralized development may look like a middle ground, but as long as the source is closed, it is clearly a closed world strategy.
The hackers—and I use the term with appropriate respect to refer to those who derive pleasure from solving complex systems problems—who have contributed to the body of open source code have taken egoless programming and stood it on its head. For many, the ego gratification of peer recognition is the most significant payment received for significant intellectual achievements. This is not to imply that software developed for profit can't be open. It just requires a mindset and business model shift.
I believe that this shift is evolutionary, and will be forced on vendors by users in this decade. For an excellent analysis of a compatible alternative, see the superdistribution (free distribution, pay-per-use) model espoused by Brad Cox (Superdistribution: Objects as Property on the Electronic Frontier, Addison–Wesley, Reading, Mass.: 1996).
The worlds are now clearly colliding, as some of the leaders of the closed world are building closed products to run on open products (Oracle 8i to run on Linux, for example) and with them (Linux running native or with an IBM OS). Part of this movement is politically motivated: It decreases the ability of one company to dominate a market. But the bigger force is the desire to improve quality and expand opportunities for innovation, which is only guaranteed if the source code is available to the user. The cost and risk of using closed systems, which require the services of the original vendor for upgrades, is simply too high to be sustained for another decade.
Interest in open source software has gone beyond the business press' fascination with Red Hat (a company that saw the evolution as a commercial opportunity before most of the world saw it at all) and the endorsement of specific programs by the old guard in response to the reality that their users are demanding open source solutions. Now we see the Chinese government supporting Linux in an apparent effort to forestall Microsoft's dominance there, and a Japanese consortium consisting of Sony, Toshiba, Hitachi and more than a dozen others that have banded together to develop a Linux-based OS for consumer products.
The question now becomes, "What is the next logical place for an open source approach to bear fruit, and how do IT managers prepare to benefit?" The answer is clearly that it will be used for applications. This is also an evolutionary move, and a logical extension of the trends shown in Figure 1. Here we see that despite attempts to define a legal distinction between systems software and application software, there is a natural tendency for functionality to migrate from the top down, constantly blurring any imagined line of demarcation. As the realm of interesting technical problems expands from the systems domain to the applications domain, hackers will follow.
Figure 1 shows that the value of software to differentiate a business (defined as functionality that an end user will notice and pay for) increases with its level of abstraction. That is, the farther it is from the machine, the more valuable it is to the end user (with the obvious exception of firms whose actual product is low-level software and infrastructure applications) and, often, the more it costs to maintain.
As soon as an application is released, however, it begins to lose its ability to differentiate, and similar functionality starts to become available at lower levels. Without patent protection, most functionality can be copied if the infrastructure provides sufficient flexibility. For example, less than a decade ago, several firms developed and marketed grammar and spell-checking applications that complemented word processors. Today, none remain because that functionality has migrated below the individual application level into the suite functionality level. Further migration into the OS and even the hardware is possible, as we have seen with some security functions.
These observations provide the basis for advice I have given to consulting clients for years: identify the differentiating components (usually no more than 30% of the total system components) and own them or the rights to them, but share the rest with as many people as possible. While five years ago I would have said that the best way to accomplish this was to collaborate with your competitors via an industry consortium or an integrator willing to build and license components, today the answer is to start looking at open source solutions. We will soon see vertical consortia sponsoring open source projects to support the requirements of their members. Today, industry leaders should be investigating opportunities to place their own code in the public domain in source form to limit their ongoing maintenance burden and to improve the overall quality of the systems.
Time for an action plan
If you manage the development of applications or the selection of IT infrastructure for your organization, it is time to evaluate and adopt elements of the open source process, as well as products developed this way. While you may feel comfortable with Linux or Apache based on their endorsement by giants such as IBM and Oracle, it requires more vision to take the next step and plan to build some of your own systems this way.
Now is the time to take a look at www.slashdot.org to find out what the hackers are doing (it's the virtual water cooler of the hacker world), and to start evaluating open source application infrastructure firms like OpenAvenue (www.openavenue.com), which help users manage the collaborative development process. It is time to talk to your peers who work for competing companies—perhaps through intermediaries to avoid the potential regulatory problems associated with such collaboration—to see how your entire market segment can benefit from the development of relevant, open source applications. Look at it this way: If it really is evolution, as I suggest, do you want to be the last one in the swamp when your peers have staked out beachfront property in new markets? I don't think so.
Disclaimer: I have been invited to join an advisory board for OpenAvenue, but currently have no business relationship with them. I would have included them in a list of similar businesses, but so far they appear to be alone in this space.
Adrian J. Bowles is research director of the IT Compliance Institute (ITCI) and a research fellow with the Robert Frances Group. He can be reached at firstname.lastname@example.org.