Systems Thinking and Unintended Consequences
- By John D. Williams
- August 21, 2000
Do you find it ironic that the meltdown at Chernobyl occurred during the testing of enhanced safety features? When the ARPANET was created, did its inventors anticipate the growth of online multi-user games, cyberstalking or the crumbling of privacy? Did the early advocates of workplace automation intend to start a new epidemic of carpal tunnel syndrome?
There are many examples of systems that have unintended consequences. Some of these consequences are the result of unanticipated interactions, while others are the result of derivative effects. For example, while the initial “harnessers” of electrical power may have anticipated electric lighting, it is less likely that they anticipated the creation of skyscrapers made possible by electric elevators. It is even more unlikely that they anticipated the way in which electrical machines have replaced brawn and brainpower.
What does all this have to do with components? Plenty, as it turns out. Consider the drive to become an e-business. The walls that once separated systems are being torn down. EAI tools and component technology now serve as powerful mechanisms for integrating enterprise systems. And the old approach to creating stovepipe systems is dying a well-deserved death. But system integration is about more than tools and clean interfaces — interactions and consequences need to be considered.
So why aren’t current approaches to system design adequate? Unfortunately, we often find ourselves bitten by fragile systems that behave in unexpected ways because we do not design systems with system principles and issues in mind. The current approach is analytical thinking, which is focused on understanding independent variables. With analytical thinking, you take a system and break it into its constituent parts. Once you understand the parts individually, you can assemble them into a whole, which is the sum of its parts. This is a useful approach to systems, but it is inadequate for understanding how systems truly behave.
What we need to do is add holistic thinking to the mix.
Holistic thinking focuses on the interdependency between variables. Systems exhibit behavior that arises from their wholeness; just understanding each part is not enough. We need to understand how the interactions and dependencies between components create additional properties and behavior. This does not mean that we drop analytical thinking; instead, the two approaches to system design are complementary. While we cannot simply pick one or the other, it does matter which one we apply first. Holistic or systems thinking is not simply a collection of a few more tools to add to our bag of tricks. It is a philosophical approach that shapes the outcome of system design.
Initially, systems were viewed as machines or closed-loop systems whose rigid structure defined their function. Analytical thinking fits this model well. Systems are simply the sum of their parts, assembled in defined ways to create specific behaviors. This is where most software development resides in its thinking about systems. It’s a perfectly adequate approach for some systems, and it works best when we treat systems as isolated creations. But the integration required by e-business doesn’t work well with this perspective.
In the next stage, systems were viewed as biological creations. The biological model views systems as open systems with single-minded, purposeful behavior. The system responds to instability in its environment by adjusting its actions to meet its defined goal. A thermostat provides a simple mechanical example of this type of system: when the temperature of the environment changes, the thermostat turns on either the air conditioner or the furnace to achieve the temperature that is its goal. Biological systems have a higher goal of survival that manifests itself as growth. As long as an organism continues to grow, neither too much nor too little, it continues to survive. Biological systems also exhibit self-organizing behavior. Closed mechanical systems move toward entropy — they wear out. Biological systems create and renew structure. Developers working on adaptive systems often create designs that fit this model.
Machine learning systems, such as those used in agent technology and robotics, also fit this model.
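The thermostat's state-maintaining behavior can be sketched in a few lines of Python. This is a minimal illustration, not code from the article; the function name, setpoint and deadband values are assumptions chosen for clarity.

```python
# A minimal sketch of a state-maintaining (reactive) system: a thermostat.
# The environment disturbs the temperature; the system acts to restore its goal.

def thermostat_step(current_temp, setpoint, deadband=1.0):
    """Return the corrective action for one control cycle."""
    if current_temp < setpoint - deadband:
        return "heat"   # too cold: turn on the furnace
    if current_temp > setpoint + deadband:
        return "cool"   # too hot: turn on the air conditioner
    return "idle"       # within tolerance: no action needed

assert thermostat_step(65.0, 70.0) == "heat"
assert thermostat_step(75.0, 70.0) == "cool"
assert thermostat_step(70.5, 70.0) == "idle"
```

Note that the goal (the setpoint) is fixed; the system only varies its response to disturbances, which is what distinguishes this reactive model from the purposeful systems described below.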
The third stage views systems as social entities. The social model views systems as being composed of a voluntary association of purposeful entities that have their own choice of goals and the means to achieve them. But purposeful is not the same as goal seeking. Goal seeking means that you have alternate means of achieving a single goal. Purposeful means you can change the goal as well as the means. In the social model, integration is a continual process. In contrast to mechanical systems that you assemble once, social systems constantly change, and integration is a necessary and ongoing process. This process requires fulfilling the purpose of the individual entities and aligning their fulfillment with that of the whole. Members are held together by common objectives and agreed-upon ways of pursuing them. Consensus is essential to alignment. Agent-based marketplaces fit the social system model. Those creating B2B marketplaces encounter the same system issues.
If systems can be viewed in these ways, how do we begin to understand their characteristics? What principles do we need to consider when designing e-business systems?
In his book, Systems Thinking: Managing Chaos and Complexity, Jamshid Gharajedaghi defined five system principles: openness, purposefulness, multi-dimensionality, emergent property and counter-intuitiveness. Openness means the behavior of a system can only be understood in the context of its environment. Open systems are guided by a code of conduct, whether that is DNA or culture. When left alone, open systems tend to reproduce themselves. Typically, we evaluate a system’s environment and identify variables that can be controlled and those that cannot. As we look at more open systems, we might identify environmental variables that can at least be influenced if not controlled. We can create systems that influence and shape their environment. In e-business, we want to shape our outcome, not simply be passive residents of our environment.
Purposefulness tells us why systems do what they do. Information tells us what systems do. Knowledge tells us how. Understanding shows us why they work as they do. There are four ways in which systems vary in their purposefulness. Some are passive — they just collect data. Others are reactive — like a thermostat, they are state-maintaining. Some systems are responsive or goal seeking. An example would be agents built on the belief-desire-intention (BDI) architecture, which may take different paths to achieve a set goal. Finally, systems may be active and purposeful in changing their goals as well as their means of achieving them.
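The distinction between goal seeking and purposefulness can be made concrete with a small sketch: a goal-seeking system holds its goal fixed but tries alternate means to reach it. This is an illustrative example in the spirit of BDI-style agents, not an implementation of any particular agent framework; the plan names and outcomes are invented.

```python
# A goal-seeking system: one fixed goal, several alternate means (plans).
# It tries each plan in turn until one achieves the goal.

def reach_goal(goal, plans):
    """Try each available plan; return the name of the one that worked."""
    for name, plan in plans:
        if plan() == goal:
            return name        # this means achieved the fixed goal
    return None                # no available means achieved the goal

plans = [
    ("primary_route", lambda: "timeout"),    # first means fails
    ("backup_route", lambda: "delivered"),   # alternate means succeeds
]
assert reach_goal("delivered", plans) == "backup_route"
```

A purposeful system, by contrast, could also decide that "delivered" is no longer the right goal and replace it — something this function, by design, cannot do.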
Multi-dimensionality lets us find complementary relations in opposing tendencies and create new wholes from parts that were considered contradictory. The key to multi-dimensionality is to think not of dichotomy but of continuum. Rather than thinking in terms of something being black or white, we need to think about shades of gray. We can combine parts in new ways and define new relationships between parts. Opposing tendencies may combine in new ways to create new relationships and behavior.
Emergent properties are the properties of the whole. They are not quantifiable like typical attributes or properties. Emergent properties are more than the sum of their parts. On a baseball team, for example, we can look at individual statistics to see how players perform. But that will not necessarily tell us how the team will perform as a whole. Teamwork is an emergent property that is a result of player interaction. Good teamwork can improve a team’s performance significantly; poor teamwork can hurt a team regardless of an individual player’s statistics. With emergent properties, we can measure their manifestation, but cannot measure them directly. Emergent properties are often the result of ongoing system processes. They are not one-time goals that can be achieved; they must be reproduced continually. If the underlying system processes go away, so do the properties.
Counter-intuitiveness tells us that actions intended to produce one outcome may instead produce the opposite. Remember the example of the Chernobyl meltdown? Cause and effect in systems can be separated in time and space. A cause may have a delayed effect. A given cause and effect can also replace one another in a circular fashion. Also, a cause may have multiple effects. Counter-intuitiveness also tells us that a difference in degree may become a difference in kind. For example, catastrophe theory tells us that continuous causes can have discontinuous effects. Would increasing your salary 10 times make any difference in your lifestyle? You would still have a salary (continuous cause), but your lifestyle would probably change (discontinuous effect). Software systems often exhibit problems in this area when it comes to scalability.
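The scalability point can be illustrated with a standard result from queueing theory, which the article does not cite but which fits its argument: in a simple M/M/1 queue, mean response time is 1 / (service rate − arrival rate). Load increases smoothly, yet latency blows up as load approaches capacity — a difference in degree becoming a difference in kind. The rates below are invented for illustration.

```python
# Continuous cause, discontinuous effect: M/M/1 queue response time.
# Load rises smoothly, but latency explodes near capacity.

def mean_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue (same units as 1/rate)."""
    if arrival_rate >= service_rate:
        raise ValueError("saturated: latency grows without bound")
    return 1.0 / (service_rate - arrival_rate)

# Server capacity: 100 requests/sec. Watch latency as load climbs.
assert abs(mean_response_time(50, 100) - 0.02) < 1e-12  # half load: 20 ms
assert abs(mean_response_time(90, 100) - 0.10) < 1e-12  # 90% load: 100 ms
assert abs(mean_response_time(99, 100) - 1.00) < 1e-12  # 99% load: 1 full second
```

Doubling load from 50 to 99 percent of capacity multiplies latency fifty-fold — the kind of nonlinearity that surprises teams who reasoned about their system one component at a time.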
The demands of e-business require that we build more sophisticated systems to meet constantly changing business needs. We can only build these robust systems if we practice systems thinking and apply systems principles. I hope you are intrigued enough by the concepts presented here to begin exploring them for yourself. Better systems are worth the effort.
John D. Williams is a contributor to Application Development Trends. He is president of Blue Mountain Commerce, a Cary, N.C.-based consulting firm specializing in enterprise, domain and application architectures. He can be reached via e-mail at [email protected].