Emerald Group Publishing Limited
Copyright © 2001, MCB UP Limited
What's this "systems" stuff anyhow?
The performance improvement required to reach Six-Sigma or "ultraquality" levels cannot be achieved without a proper understanding of the concept of a system. Deming emphasized that fact over and over, and it is at the heart of his idea of profound knowledge. Yet the way most managers were trained to think implicitly ignores the system concept.
In an economy that is global in scale, and in which technological and product life-cycles turn over several times within many companies' decision radius (the time it takes to make a decision), it is necessary to understand three critical and interrelated concepts: a system, emergent properties, and complexity. We need to explore each of these concepts:
System: a set of different elements so connected or related as to perform a unique function not performable by the elements alone (Rechtin and Maier, 1997).
Emergent properties: those functions, attributes or behaviors, good or bad, which would not exist except for the operation of a system.
Complexity: something that is composed of interconnected or interwoven elements that function as a system to produce emergent properties.
A system is an aggregation of elements, but not just any aggregation will do. Several requirements must be met before we can call something a system. First, there must be emergent properties, i.e. it must provide something that is not available with the parts alone. That requires interconnection and interrelationships.
Deming was fond of pointing out that it is not sufficient to gather together all of the best automobile parts in one place. That collection does not make a system. To become a system they must be designed to work together, or the connections that make the system called "automobile", which provides the emergent property called "transportation", will be missing and the parts will remain a useless pile of materials.
A good system is both purposeful and valuable to some customer or user. A farm tractor is a system similar to an automobile: it has four wheels, an engine, a seat, a transmission and (today) an air-conditioner. But if you do not have to plow a field, and do have to transport a family around town, then a tractor might be purposeful, but it does not have value to you. Lacking either purposefulness or value renders the system little more than an object of curiosity.
Most good systems are also feedback-controlled systems. In order to maintain its current state, and improve in the future, a system requires some form of feedback; but the feedback is also a source of complexity and carries both advantages and risks.
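The double-edged character of feedback can be illustrated with a minimal sketch (an assumed example, not from the article): a discrete proportional feedback loop driving a system state toward a setpoint. With moderate gain the feedback maintains the desired state; with excessive gain the very same mechanism overcorrects and destabilizes the system.

```python
# A minimal sketch of a proportional feedback loop (illustrative only).
# Moderate gain stabilizes the system; excessive gain turns the same
# feedback into a source of runaway oscillation.

def run_loop(gain, setpoint=100.0, state=20.0, steps=40):
    """Apply proportional feedback for a number of steps; return the history."""
    history = [state]
    for _ in range(steps):
        error = setpoint - state
        state += gain * error          # correction proportional to the error
        history.append(state)
    return history

stable = run_loop(gain=0.5)    # converges smoothly to the setpoint
unstable = run_loop(gain=2.1)  # overcorrects: each step amplifies the error

print(round(stable[-1], 2))                # ~100.0, settled at the setpoint
print(abs(unstable[-1] - 100.0) > 1000)    # True: diverging oscillation
```

The gain values are arbitrary illustrations; the point is that the boundary between "corrective" and "destructive" feedback is a property of the whole loop, not of any single element.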
It is the emergent properties that give a system life, and it is both the nature of the elements and their interrelationships that give rise to the emergent properties. To give rise to the "family transportation" emergent property an automobile must have certain components, and they must be configured in a way that gives them the correct interrelationship. For example, you might have the best tires and the best axle, but if they are at right angles to each other (Figure 1), then their interrelationship does not allow the emergent property needed to work in an automobile system.
Figure 1 Interrelationship of components
Which emergent properties are useful depends on the nature of the system. But here is the "dark side of the force": not every emergent property is useful ... some are either wasteful or dangerous. Some systems do unexpected things, show surprising behavior, or result in unintended consequences.
A system produces emergent properties, and that implies the existence of the interconnections and interrelationships that make complexity. It is in the complexity of the system that both the opportunities and the dangers lie. The complexity gives rise to the emergent properties that define the system and make it valuable, but it is also the source of the most serious, and least expected, problems.
It is in the nature of complexity that most people lose the system's bubble. Complexity was not even studied seriously or widely until the 1970s, and when it was studied some surprising ideas emerged. It was discovered that complexity is on the cusp between stability and chaos, and that only a small amount of perturbation is needed to push the system one way or the other.
In recent years, some people have attempted to apply chaos and complexity theory to management systems, noting that complexity gives rise to new systems that did not exist before. Some seem to believe that this leads to a kind of mystical creativity that guarantees ultimate success. Although some good material was published, there is also considerable evidence that many of these authors did not understand the nature of complexity.
To be sure, a complex system can erupt into chaos, and from that chaos can emerge new stable structures. "Order out of chaos" is the buzzword used to describe this very real process, but what is rarely addressed is the fact that you might not like the new stable state. In other words, there is no guarantee that the new stable order is better than the old. It is in the best interests of all aboard if an airliner is in a stable flight regime. Or is it? Not all stable states are equal. Ask any pilot whether a "flat spin" is a desirable state. Stable, yes, but desirable, no! Bankruptcy and death are also examples of stable states.
Research demonstrates that systems with an order of complexity as small as three elements, with two interconnections per element, can produce chaotic behavior. In terms of a product or organization, this means unexpected behavior (good or bad), unintended consequences, and unpredictability. Not all such systems will produce chaos, but the possibility certainly exists.
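A classic mathematical illustration of this point (our example, not cited by the article) is the Lorenz system: just three state variables, each coupled to the others, yet two trajectories that start one part in a million apart end up completely different. The sketch below uses simple Euler integration.

```python
# A minimal sketch (illustrative, not from the article): the Lorenz system,
# three coupled state variables integrated with Euler steps. Two starting
# points differing by one part in a million diverge completely -- a system
# of only three interconnected elements can behave chaotically.

def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the three coupled Lorenz equations."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def trajectory(x0, steps=8000):
    """Integrate from (x0, 1, 1) and record the x-coordinate at each step."""
    x, y, z = x0, 1.0, 1.0
    xs = []
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
        xs.append(x)
    return xs

a = trajectory(1.0)
b = trajectory(1.000001)                      # perturbed by one part per million
gap = max(abs(p - q) for p, q in zip(a, b))
print(gap > 1.0)                              # True: the trajectories diverge
```

The behavior is entirely deterministic, yet practically unpredictable: exactly the "unexpected behavior, unintended consequences, and unpredictability" described above.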
How we think about systems
Most managers today are singularly ill-equipped to deal with complex systems, whether that system is their organization or the products it produces. The root of the problem is the way we were trained to think about problems. Our basis for solving problems, and thinking about systems, is reductionism and analysis. In other words, break the system down into smaller elements that can easily be analyzed, rather than tackling the larger entity that cannot.
There is a small problem with that approach: the very interrelationships and connections that make the system behave as a system are lost in the breaking down. You can analyze tires, engines and transmissions forever and not come up with the system "automobile" or its emergent property of transportation.
How good would your physician be if he or she had an infinite knowledge of the cells of the human body, but had not a clue how they worked together or how they malfunctioned?
Without using an overarching way of looking at the system, there is little possibility of understanding it. And without understanding a system there is very little possibility of reengineering it to do anything useful. One of the serious errors made by the reengineering movement was the notion that they could build a new system tabula rasa, without regard for the old system. Lacking understanding of the old system (its function, purpose, design, operation and value to the stakeholders) makes it very difficult, if not impossible, to architect a new system.
What to do?
The modern business organization is simply too complex to succumb to simple analysis, and the more complex it is, the more likely it is to discover a complexity that can degenerate into an undesirable "flat spin" state. Fortunately, there are some principles that will help when evaluating an organizational system, or designing a new one:
Define and manage the interfaces. An "interface" is a point where two elements come together and exchange something. In other words, it concerns the interconnections and interrelationships that make up a system. The most important key is to realize that it is in designing and managing those interfaces that one gains the greatest leverage, and faces the greatest risks.
Organize to reduce complexity. If the "real" way things get done is considerably different from the "right way" found in a policy manual, then it is a sure bet that the interfaces are mismanaged and/or are too complex. People will find a simplified way to "work around" a bad system, but they should not have to do so. Furthermore, if they use "work arounds" then there is an increased probability of a "flat spin" behavior emerging: stable, but destructive. It is well recognized that a poor design, i.e. one that is too complex, may well lead to greater, undesirable complexity when informal attempts to simplify things are made.
Design it simple, stupid (DISS). A high degree of complexity is necessary in many of our organizations, but that does not mean that the relationship of elements of the overall system must be complex. The goal is to create an organization in which the elements have a high order of internal complexity, and a low order of external complexity. Each element, in other words, must be as independent as possible. A robust and responsive organization will result.
Make interfaces clear and well defined. Interfaces that are ambiguous, gray, or subject to interpretation will result in variation in behavior. High complexity combined with ill-defined interfaces can be described in a single word: disaster.
Do not over-optimize the system. It is a natural tendency for managers to require all elements to optimize their performance. There is a simple rule that applies to all complex systems: not every element can be optimized; at least one element must be sub-optimized to optimize the entire system. This advice flies in the face of what many people believe to be true. However, consider this true situation. A company required all departments to optimize by attaining the lowest costs possible. The travel office was required to obtain the lowest air fares possible. Its optimization meant that expensive engineers, marketing people and executives either had to endure many hours of layover at airports, or took flights that spent all day reaching a destination because of the intermediate stops. The travel office optimization, done according to the general rule for the entire organization, cost a huge amount of money before it was corrected.

Or consider the urge to optimize the "big three" of program management: cost, schedule and performance. At least one of these must be sub-optimized if the entire program is to be optimized. Whoever came up with the notion of "cost as an independent variable" did not understand that the big three are interdependent: among them there are no truly independent variables.
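The travel-office lesson can be reduced to a toy calculation (the numbers are hypothetical, chosen only for illustration): once the traveler's time carries a cost, the flight with the lowest fare is no longer the flight with the lowest total cost to the system.

```python
# Hypothetical numbers (illustrative only): choosing flights for an engineer
# whose time is assumed to cost the company $150/hour. Optimizing the travel
# office's own metric (fare) sub-optimizes the system; the cheapest *total*
# trip pays a higher fare to buy back expensive hours.

HOURLY_COST = 150.0   # assumed loaded cost of the traveler's time

flights = [
    {"name": "nonstop",  "fare": 900.0, "hours": 3.0},
    {"name": "one stop", "fare": 600.0, "hours": 8.0},
    {"name": "milk run", "fare": 400.0, "hours": 14.0},
]

def total_cost(f):
    """System-level cost: fare plus the value of the traveler's time."""
    return f["fare"] + HOURLY_COST * f["hours"]

cheapest_fare = min(flights, key=lambda f: f["fare"])   # travel office's pick
cheapest_trip = min(flights, key=total_cost)            # system optimum

print(cheapest_fare["name"], total_cost(cheapest_fare))   # milk run 2500.0
print(cheapest_trip["name"], total_cost(cheapest_trip))   # nonstop 1350.0
```

The travel office must accept a "worse" fare for the organization to achieve a better total: exactly the sub-optimization of one element in service of the whole.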
Leave details of managing sub-elements to the specialists who know how. A good higher level manager understands that it is the interfaces that concern them, and not the day-to-day operation of the elements of the system. There are significant differences between the thinking required at higher and lower levels of complexity within the organization (Rechtin and Maier, 1997).
Manage perturbations and change as a system. The mark of a good system is that it is robust to challenges, perturbations and change. Systems, including organizations, will respond to these threats in one of three manners (Figures 2b-2d). The trajectory to the changed state shown in Figure 2a is the one implicitly, if erroneously, modeled by all too many people. Something happens, and then, by some miraculous process of immaculate metamorphosis, the new state is achieved. Real systems behave differently.
Figure 2a Unrealistic trajectory to changed state
The approach in Figure 2b shows what happens in many organizations. The perturbation occurs and, for one reason or many, the organization jumps into chaos. Massive disruption occurs, and there is even danger of the organization settling into a new state that is a "flat spin" rather than something desirable. This situation is called subcritically damped change.
Figure 2b Subcritically damped change
Figure 2c depicts a situation that occurs in many organizations. There is so much resistance and foot dragging that the change to the system does not occur on time or as planned. This situation is called overcritically damped change.
Figure 2c Overcritically damped change
The ideal situation is depicted in Figure 2d: critically damped change. So how do we achieve it? In practice, we do not; systems tend to behave like Figure 2b or 2c rather than 2d. According to Rechtin and Maier (1997), it is preferable to implement several intermediate stable states on the way to the final stable state. Each time change is applied, there is some familiar anchor in the old system that people can live with.
Figure 2d Critically damped change
Physicists and engineers who study feedback controlled systems call this approach quasi-static change (Figure 3). They know that such systems may go into either chaos or some "flat spin" regime if perturbed too much at one time. However, if the system is perturbed a little bit at a time, and the minor perturbations that result from each change are allowed to die out before the next change is made, the end state can be reached with no massive and destructive upheavals. Done otherwise, you will get to a new stable state, to be sure, but there is no telling what that state will be like.
Figure 3 Quasi-static change
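The quasi-static strategy can be illustrated with a simple simulation (assumed dynamics, not from the article): an underdamped second-order system moved from 0 to 100. One big setpoint change produces a large overshoot past the target; many small changes, each allowed to settle before the next, reach the same end state with far less excursion.

```python
# A minimal sketch (assumed dynamics, illustrative only): an underdamped
# system driven from 0 to 100. One-shot change overshoots badly;
# quasi-static change -- small steps, each allowed to die out -- reaches
# the same end state without the destructive excursion.

def settle(x, v, target, steps, omega=1.0, zeta=0.2, dt=0.05):
    """Integrate the damped system toward `target`; return state, velocity, peak."""
    peak = x
    for _ in range(steps):
        accel = -2 * zeta * omega * v - omega**2 * (x - target)
        v += accel * dt
        x += v * dt
        peak = max(peak, x)
    return x, v, peak

# One-shot change: slam the setpoint from 0 to 100 all at once.
x, v, peak_big = settle(0.0, 0.0, 100.0, steps=4000)

# Quasi-static change: ten steps of 10, each given time to settle.
x, v, peak_qs = 0.0, 0.0, 0.0
for target in range(10, 110, 10):
    x, v, p = settle(x, v, float(target), steps=400)
    peak_qs = max(peak_qs, p)

print(round(peak_big))   # overshoots well past 100
print(round(peak_qs))    # stays close to 100
```

Both paths arrive at the same stable state near 100; the difference is the size of the transient excursions along the way, which is precisely the point of perturbing the system a little at a time.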
H. James Harrington, CEO, Systemcorp, Quebec, Canada and California, USA
Joseph J. Carr, Naval Air Systems Command, Patuxent River, Maryland, USA
Robert P. Reid, Keller Graduate School and University of Maryland, USA
Rechtin, E. and Maier, M.W. (1997), The Art of Systems Architecting, CRC Press, Boca Raton, FL.