If a factory is torn down but the rationality which produced it is left standing, then that rationality will simply produce another factory. If a revolution destroys a government, but the systematic patterns of thought that produced that government are left intact, then those patterns will repeat themselves.
Managers are not confronted with problems that are independent of each other, but with dynamic situations that consist of complex systems of changing problems that interact with each other. I call such situations messes. Managers do not solve problems, they manage messes.
A system is a set of things interconnected in such a way that they produce their own pattern of behavior over time.
On the one hand, we have been taught to analyze, to use our rational ability, to trace direct paths from cause to effect, to look at things in small and understandable pieces, to solve problems by acting on or controlling the world around us.
On the other hand, long before we were educated in rational analysis, we all dealt with complex systems. We have built up intuitively, without analysis, often without words, a practical understanding of how these systems work, and how to work with them.
Words and sentences must, by necessity, come only one at a time in linear, logical order. Systems happen all at once. They are connected not just in one direction, but in many directions simultaneously.
They are interrelated through the physical flow of food, and through an elegant set of regulating chemical signals. The function of this system is to break down food into its basic nutrients and to transfer those nutrients into the bloodstream, while discarding unusable wastes.
As the days get shorter in the temperate zones, a deciduous tree puts forth chemical messages that cause nutrients to migrate out of the leaves into the trunk and roots and that weaken the stems, allowing the leaves to fall.
A system’s function or purpose is not necessarily spoken, written, or expressed explicitly, except through the operation of the system. The best way to deduce the system’s purpose is to watch for a while to see how the system behaves.
A system generally goes on being itself, changing only slowly if at all, even with complete substitutions of its elements — as long as its interconnections and purposes remain intact.
If the interconnections change, the system may be greatly altered. It may even become unrecognizable, even though the same players are on the team. Change the rules from those of football to those of basketball, and you’ve got a whole new ball game.
A change in purposes changes a system profoundly, even if every element and interconnection remains the same.
To ask whether elements, interconnections, or purposes are most important in a system is to ask an unsystematic question. All are essential. All interact. All have their roles. But the least obvious part of the system, its function or purpose, is often the most crucial determinant of the system’s behavior. Interconnections are also critically important. Changing relationships usually changes system behavior. The elements, the parts of the systems we are most likely to notice, are often (not always) least important in defining the unique characteristics of the system — unless changing an element also results in changing relationships or purpose.
Human beings have invented hundreds of stock-maintaining mechanisms to make inflows and outflows independent and stable. Reservoirs enable residents and farmers downriver to live without constantly adjusting their lives and work to a river’s varying flow. Banks enable you temporarily to earn money at a rate different from how you spend. Inventories of products along a chain from distributors to wholesalers to retailers allow production to proceed smoothly although customer demand varies, and allow customer demand to be filled even though production rates vary.
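The mechanism these examples share — a stock that decouples an inflow from an outflow — can be sketched in a few lines of Python. This is a toy illustration, not from the text; all the numbers are invented:

```python
def run_stock(inflow, outflow, initial=0.0):
    """A stock integrates its flows: balance[t+1] = balance[t] + inflow[t] - outflow[t]."""
    balance = initial
    history = []
    for i, o in zip(inflow, outflow):
        balance += i - o
        history.append(balance)
    return history

# Earn a steady $100/month but spend unevenly; the account (a stock)
# buffers the mismatch, so spending need not track earning moment to moment.
balances = run_stock(inflow=[100] * 6,
                     outflow=[50, 150, 80, 120, 60, 140],
                     initial=200)
```

As long as the stock stays above zero, the two flows can vary independently — which is exactly the service a reservoir, a bank balance, or an inventory provides.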
The time it takes for an exponentially growing stock to double in size, the “doubling time,” equals approximately 70 divided by the growth rate (expressed as a percentage).
Example: If you put $100 in the bank at 7% interest per year, you will double your money in 10 years ( 70/7 = 10). If you get only 5% interest, your money will take 14 years to double.
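The rule of 70 is a shorthand for the exact doubling time, ln 2 / ln(1 + r). A quick check in Python (illustrative only) shows how close the approximation comes at the interest rates in the example:

```python
import math

def doubling_time_rule_of_70(rate_percent):
    """Approximate doubling time: 70 divided by the growth rate in percent."""
    return 70 / rate_percent

def doubling_time_exact(rate_percent):
    """Exact doubling time for compound growth at the given annual rate."""
    return math.log(2) / math.log(1 + rate_percent / 100)

for rate in (5, 7):
    print(rate, doubling_time_rule_of_70(rate), round(doubling_time_exact(rate), 2))
```

At 7% the rule gives 10 years against an exact 10.24; at 5%, 14 years against 14.21 — close enough for mental arithmetic about any exponentially growing stock.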
You’ll be thinking not in terms of a static world, but a dynamic one. You’ll stop looking for who’s to blame; instead you’ll start asking, “What’s the system?”
It isn’t because the car dealer is stupid. It’s because she is struggling to operate in a system in which she doesn’t have, and can’t have, timely information and in which physical delays prevent her actions from having an immediate effect on inventory. She doesn’t know what her customers will do next. When they do something, she’s not sure they’ll keep doing it. When she issues an order, she doesn’t see an immediate response. This situation of information insufficiency and physical delays is very common.
Delays are pervasive in systems, and they are strong determinants of behavior. Changing the length of a delay may make a large change in the behavior of a system.
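The dealer’s predicament can be simulated with a toy stock-management loop. This is a sketch of the idea, not the book’s model, and all parameters are invented:

```python
def simulate_inventory(delay, steps=30):
    """A dealer orders enough to replace sales and close the gap between
    desired and actual inventory, but orders arrive `delay` steps later.
    All numbers are hypothetical, for illustration only."""
    desired, inventory, sales = 100.0, 90.0, 10.0
    pipeline = [sales] * delay  # orders already placed but not yet delivered
    history = []
    for _ in range(steps):
        inventory += pipeline.pop(0) - sales   # deliveries in, sales out
        order = sales + (desired - inventory)  # replace sales, close the gap
        pipeline.append(max(order, 0.0))
        history.append(inventory)
    return history
```

With a one-step delay the same ordering rule settles the inventory at its target; with a three-step delay it overshoots, undershoots, and oscillates — nothing about the decision rule changed, only the length of the delay.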
That very large system, with interconnected industries responding to each other through delays, entraining each other in their oscillations, and being amplified by multipliers and speculators, is the primary cause of business cycles. Those cycles don’t come from presidents, although presidents can do much to ease or intensify the optimism of the upturns and the pain of the downturns. Economies are extremely complex systems; they are full of balancing feedback loops with delays, and they are inherently oscillatory.
If the land mechanism as a whole is good, then every part is good, whether we understand it or not. If the biota, in the course of aeons, has built something we like but do not understand, then who but a fool would discard seemingly useless parts? To keep every cog and wheel is the first precaution of intelligent tinkering.
A set of feedback loops that can restore or rebuild feedback loops is resilience at a still higher level — meta-resilience, if you will. Even higher meta-meta-resilience comes from feedback loops that can learn, create, design, and evolve ever more complex restorative structures. Systems that can do this are self-organizing.
JIT deliveries of products to retailers or parts to manufacturers have reduced inventory instabilities and brought down costs in many industries. The JIT model also has made the production system more vulnerable, however, to perturbations in fuel supply, traffic flow, computer breakdown, labor availability, and other possible glitches.
Complex systems can evolve from simple systems only if there are stable intermediate forms. The resulting complex forms will naturally be hierarchical. That may explain why hierarchies are so common in the systems nature presents to us. Among all possible complex forms, hierarchies are the only ones that have had the time to evolve.
The original purpose of a hierarchy is always to help its originating subsystems do their jobs better. This is something, unfortunately, that both the higher and the lower levels of a greatly articulated hierarchy easily can forget. Therefore, many systems are not meeting our goals because of malfunctioning hierarchies.
If a team member is more interested in personal glory than in the team winning, he or she can cause the team to lose. If a body cell breaks free from its hierarchical function and starts multiplying wildly, we call it a cancer.
Just as damaging as sub-optimization, of course, is the problem of too much central control. If the brain controlled each cell so tightly that the cell could not perform its self-maintenance functions, the whole organism could die. If central rules and regulations prevent students or faculty from exploring fields of knowledge freely, the purpose of the university is not served. The coach of a team might interfere with the on-the-spot perceptions of a good player, to the detriment of the team.
In our heads, we can keep track of only a few variables at one time. We often draw illogical conclusions from accurate assumptions, or logical conclusions from inaccurate assumptions. Most of us, for instance, are surprised by the amount of growth an exponential process can generate. Few of us can intuit how to damp oscillations in a complex system.
When a systems thinker encounters a problem, the first thing he or she does is look for data, time graphs, the history of the system. That’s because long-term behavior provides clues to the underlying system structure. And structure is the key to understanding not just what is happening, but why.
The structure of a system is its interlocking stocks, flows, and feedback loops. Structure determines what behaviors are latent in the system.
These explanations give you no ability to predict what will happen tomorrow. They give you no ability to change the behavior of the system — to make the stock market less volatile or a more reliable indicator of the health of corporations or a better vehicle to encourage investment, for instance.
Most economic analysis goes one level deeper, to behavior over time. Econometric models strive to find the statistical links among past trends in income, savings, investment, government spending, interest rates, output, or whatever, often in complicated equations.
These behavior-based models are more useful than event-based ones, but they still have fundamental problems. First, they typically overemphasize system flows and underemphasize stocks. Economists follow the behavior of flows, because that’s where the interesting variations and most rapid changes in systems show up. Economic news reports on the national production (flow) of goods and services, the GNP, rather than the total physical capital (stock) of the nation’s factories and farms and businesses that produce those goods and services. But without seeing how stocks affect their related flows through feedback processes, one cannot understand the dynamics of economic systems or the reasons for their behavior.
Linear relationships are easy to think about: the more the merrier. Linear equations are solvable, which makes them suitable for textbooks. Linear systems have an important modular virtue: you can take them apart and put them together again — the pieces add up.
Nonlinear systems generally cannot be solved and cannot be added together. Nonlinearity means that the act of playing the game has a way of changing the rules. That twisted changeability makes nonlinearity hard to calculate, but it also creates rich kinds of behavior that never occur in linear systems.
A nonlinear relationship is one in which the cause does not produce a proportional effect. The relationship between cause and effect can only be drawn with curves or wiggles, not with a straight line.
There are only boundaries of word, thought, perception, and social agreement — artificial, mental-model boundaries.
The greatest complexities arise exactly at boundaries. Disorderly, mixed-up borders are sources of diversity and creativity.
Whether it’s important to think about the full flow from mine to dump, or as industry calls it, “from cradle to grave,” depends on who wants to know, for what purpose, over how long.
When we think in terms of systems, we see that a fundamental misconception is embedded in the popular term “side-effects.” This phrase means roughly “effects which I hadn’t foreseen or don’t want to think about.” Side-effects no more deserve the adjective “side” than does the “principal” effect. It is hard to think in terms of systems, and we eagerly warp our language to protect ourselves from the necessity of doing so.
If we’re to understand anything, we have to simplify, which means we have to make boundaries. Often that’s a safe thing to do.
Where to draw a boundary around a system depends on the purpose of the discussion — the questions we want to ask.
This “my model is bigger than your model” game results in enormously complicated analyses, which produce piles of information that may only serve to obscure the answers to the questions at hand.
It’s a great art to remember that boundaries are of our own making, and that they can and should be reconsidered for each new discussion, problem, or purpose. It’s a challenge to stay creative enough to drop the boundaries that worked for the last problem and to find the most appropriate set of boundaries for the next question. It’s also a necessity, if problems are to be solved well.
Understanding layers of limits and keeping an eye on the next upcoming limiting factor is not a recipe for perpetual growth, however. For any physical entity in a finite environment, perpetual growth is impossible. Ultimately, the choice is not to grow forever but to decide what limits to live within.
I realize with fright that my impatience for the re-establishment of democracy had something almost communist in it; or, more generally, something rationalist. I had wanted to make history move ahead in the same way that a child pulls on a plant to make it grow more quickly.
Bounded rationality means that people make quite reasonable decisions based on the information they have. But they don’t have perfect information, especially about more distant parts of the system.
We do our best to further our own nearby interests in a rational way, but we can take into account only what we know. We don’t know what others are planning to do, until they do it. We rarely see the full range of possibilities before us. We often don’t foresee (or choose to ignore) the impacts of our actions on the whole system. So instead of finding a long-term optimum, we discover within our limited purview a choice we can live with for now, and we stick to it, changing our behavior only when forced to.
We live in an exaggerated present — we pay too much attention to recent experience and too little attention to the past, focusing on current events rather than long-term behavior. We discount the future at rates that make no economic or ecological sense. We don’t give all incoming signals their appropriate weights. We don’t let in at all news we don’t like, or information that doesn’t fit our mental models. Which is to say, we don’t even make decisions that optimize our own individual good, much less the good of the system as a whole.
Economic theory as derived from Adam Smith assumes first that homo economicus acts with perfect optimality on complete information, and second that when many of the species homo economicus do that, their actions add up to the best possible outcome for everybody.
In your new position, you experience the information flows, the incentives and disincentives, the goals and discrepancies, the pressures — the bounded rationality — that go with that position. It’s possible that you could retain your memory of how things look from another angle, and that you burst forth with innovations that transform the system, but it’s distinctly unlikely. If you become a manager, you probably will stop seeing labor as a deserving partner in production, and start seeing it as a cost to be minimized. If you become a financier, you probably will overinvest during booms and underinvest during busts, along with all the other financiers. If you become very poor, you will see the short-term rationality, the hope, the opportunity, the necessity of having many children.
To paraphrase a common prayer: God grant us the serenity to exercise our bounded rationality freely in the systems that are structured appropriately, the courage to restructure the systems that aren’t, and the wisdom to know the difference!
Rational elites know everything there is to know about their self-contained technical or scientific worlds, but lack a broader perspective. They range from Marxist cadres to Jesuits, from Harvard MBAs to army staff officers. They have a common underlying concern: how to get their particular system to function. Meanwhile, civilization becomes increasingly directionless and incomprehensible.
The most effective way of dealing with policy resistance is to find a way of aligning the various goals of the subsystems, usually by providing an overarching goal that allows all actors to break out of their bounded rationality. If everyone can work harmoniously toward the same outcome (if all feedback loops are serving the same goal), the results can be amazing. The most familiar examples of this harmonization of goals are mobilizations of economies during wartime, or recovery after war or natural disaster.
3 ways to avoid the tragedy of the commons:
- Educate and exhort. Help people to see the consequences of unrestrained use of the commons. Appeal to their morality. Persuade them to be temperate. Threaten transgressors with social disapproval or eternal hellfire.
- Privatize the commons. Divide it up, so that each person reaps the consequences of his or her own actions. If some people lack the self-control to stay below the carrying capacity of their own private resource, those people will harm only themselves and not others.
- Regulate the commons. “Mutual coercion, mutually agreed upon.” Regulation can take many forms, from outright bans on certain behaviors to quotas, permits, taxes, incentives. To be effective, regulation must be enforced by policing and penalties.
Privatization makes sure that gains and losses fall on the same decision maker. The owner still may abuse the resource, but now it takes ignorance or irrationality to do so.
And to complete this tragic archetype, the desired state of the system is influenced by the perceived state. Standards aren’t absolute. When perceived performance slips, the goal is allowed to slip. “Well, that’s about all you can expect.” “Well, we’re not doing much worse than we were last year.” “Well, look around, everybody else is having trouble too.”
Even escalating in a good direction can be a problem, because it isn’t easy to stop. Each hospital trying to outdo the others in up-to-date, powerful, expensive diagnostic machines can lead to out-of-sight health care costs. Escalation in morality can lead to holier-than-thou sanctimoniousness. Escalation in art can lead from baroque to rococo to kitsch.
The only other graceful way out of the escalation system is to negotiate a disarmament. That’s a structural change, an exercise in system design. It creates a new set of balancing controlling loops to keep the competition in bounds (parental pressure to stop the kids’ fight; regulations on the size and placement of advertisements; peace-keeping troops in violence-prone areas). Disarmament agreements in escalation systems are not usually easy to get, and are never very pleasing to the parties involved, but they are much better than staying in the race.
Because the poor can afford to buy only small quantities (of food, fuel, seed, fertilizer), they pay the highest prices. Because they are often unorganized and inarticulate, a disproportionately small part of government expenditure is allocated to their needs. Ideas and technologies come to them last. Disease and pollution come to them first. They are the people who have no choice but to take dangerous, low-paying jobs, whose children are not vaccinated, who live in crowded, crime-prone, disaster-prone areas.
There are many devices to break the loop of the rich getting richer and the poor getting poorer: tax laws written (unbeatably) to tax the rich at higher rates than the poor; charity; public welfare; labor unions; universal and equal health care and education; taxation on inheritance (a way of starting the game over with each new generation).
The way out:
- Diversification, which allows those who are losing the competition to get out of that game and start another one.
- Strict limitation on the fraction of the pie any one winner may win (antitrust laws).
- Policies that level the playing field, removing some of the advantage of the strongest players or increasing the advantage of the weakest.
- Policies that devise rewards for success that do not bias the next round of competition.
There is a huge discrepancy between your desired and actual state, and there are very few options available to you for closing that gap. But one thing you can do is take drugs. The drugs do nothing to improve your real situation — in fact, they likely make it worse. But the drugs quickly alter your perception of your state, numbing your senses and making you feel tireless and brave.
Why does anyone enter the trap? First, the intervenor may not foresee that the initial urge to help out a bit can start a chain of events that leads to ever-increasing dependency, which ultimately will strain the capacity of the intervenor. The American health-care system is experiencing the strains of that sequence of events.
Second, the individual or community that is being helped may not think through the long-term loss of control and the increased vulnerability that go along with the opportunity to shift a burden to an able and powerful intervenor.
Addiction is finding a quick and dirty solution to the symptom of the problem, which prevents or distracts one from the harder and longer-term task of solving the real problem.
Withdrawal means finally confronting the real (and usually much deteriorated) state of the system and taking the actions that the addiction allowed one to put off.
The way out:
Again, the best way out of this trap is to avoid getting in. Beware of symptom-relieving or signal-denying policies or practices that don’t really address the problem. Take the focus off short-term relief and put it on long-term restructuring.
Wherever there are rules, there is likely to be rule beating. Rule beating means evasive action to get around the intent of a system’s rules — abiding by the letter, but not the spirit, of the law. Rule beating becomes a problem only when it leads a system into large distortions, unnatural behaviors that would make no sense at all in the absence of the rules.
Departments of governments, universities, and corporations often engage in pointless spending at the end of the fiscal year just to get rid of money — because if they don’t spend their budget this year, they will be allocated less next year.
The way out of the trap, the opportunity, is to understand rule beating as useful feedback, and to revise, improve, rescind, or better explain the rules. Designing rules better means foreseeing as far as possible the effects of the rules on the subsystems, including any rule beating they might engage in, and structuring the rules to turn the self-organizing capabilities of the system in a positive direction.
Systems, like the 3 wishes in the traditional fairy tale, have a terrible tendency to produce exactly and only what you ask them to produce. Be careful what you ask them to produce.
If the desired system state is national security, and that is defined as the amount of money spent on the military, the system will produce military spending. It may or may not produce national security. In fact, security may be undermined if the spending drains investment from other parts of the economy, and if the spending goes for exorbitant, unnecessary, or unworkable weapons.
These examples confuse effort with result, one of the most common mistakes in designing systems around the wrong goal.
If you define the goal of a society as GNP, that society will do its best to produce GNP. It will not produce welfare, equity, justice, or efficiency unless you define a goal and regularly measure and report the state of welfare, equity, justice, or efficiency.
You can often stabilize a system by increasing the capacity of a buffer. But if a buffer is too big, the system gets inflexible. It reacts too slowly. And big buffers of some sorts, such as water reservoirs or inventories, cost a lot to build or maintain. Businesses invented JIT inventories, because occasional vulnerability to fluctuations or screw-ups is cheaper than certain, constant inventory costs — and because small-to-vanishing inventories allow for more flexible response to shifting demand.
Delays that are too short cause overreaction, “chasing your tail,” oscillations amplified by the jumpiness of the response. Delays that are too long cause damped, sustained, or exploding oscillations, depending on how much too long.
A complex system usually has numerous balancing feedback loops it can bring into play, so it can self-correct under different conditions and impacts. Some of those loops may be inactive much of the time, but their presence is critical to the long-term welfare of the system.
One of the big mistakes we make is to strip away these “emergency” response mechanisms because they aren’t often used and they appear to be costly.
Another is in encroaching on our own time for personal rest, recreation, socialization, and meditation.
Price is the central piece of information signaling both producers and consumers. The more the price is kept clear, unambiguous, timely, and truthful, the more smoothly markets will operate. Prices that reflect full costs will tell consumers how much they can actually afford and will reward efficient producers.
Give the people who want to distort market-price signals the power to influence government leaders, allow the distributors of information to be self-interested partners, and none of the necessary balancing feedbacks work well. Both market and democracy erode.
Power over the rules is real power. That’s why lobbyists congregate when Congress writes laws, and why the Supreme Court, which interprets and delineates the Constitution — the rules for writing the rules — has even more power than Congress.
The most stunning thing living systems and some social systems can do is to change themselves utterly by creating whole new structures and behaviors. In biological systems that power is called evolution. In human economies it’s called technical advance or social revolution. In systems lingo it’s called self-organization.
That pattern, and the rules for replicating and rearranging it, has been constant for something like 3 billion years, during which it has spewed out an unimaginable variety of failed and successful self-evolved creatures.
The source of variety is human creativity (whatever that is) and the selection mechanism can be whatever the market will reward, or whatever governments and foundations will fund, or whatever meets human needs.
“To make profits,” most corporations would say, but that’s just a rule, a necessary condition to stay in the game. What is the point of the game? To grow, to increase market share, to bring the world more and more under the control of the corporation, so that its operations become ever more shielded from uncertainty. That corporate goal — to engulf everything — is the goal of cancer too. Actually it’s the goal of every living population — and only a bad one when it isn’t balanced by higher-level balancing feedback loops that never let an upstart power-loop-driven entity control the world.
Paradigm: The mindset out of which the system — its goals, structure, rules, delays, parameters — arises.
It doesn’t matter how the tax law of a country is written. There is a shared idea in the minds of the society about what a “fair” distribution of the tax load is. Whatever the laws say, by fair means or foul, by complications, cheating, exemptions or deductions, by constant sniping at the rules, actual tax payments will push right up against the accepted idea of “fairness.”
There is yet one leverage point that is even higher than changing a paradigm. That is to keep oneself unattached in the arena of paradigms, to stay flexible, to realize that no paradigm is “true,” that every one is a tremendously limited understanding of an immense and amazing universe that is far beyond human comprehension. It is to let go into not-knowing, into what the Buddhists call enlightenment.
The real trouble with this world of ours is not that it is an unreasonable world, nor even that it is a reasonable one. The commonest kind of trouble is that it is nearly reasonable, but not quite. Life is not an illogicality; yet it is a trap for logicians. It looks just a little more mathematical and regular than it is.
The truth was, we didn’t even follow our advice. We gave learned lectures on the structure of addiction and could not give up coffee. We knew all about the dynamics of eroding goals and eroded our own jogging programs. We warned against the traps of escalation and shifting the burden and then created them in our own marriages.
Changing them is not as simple as saying “now all change,” or of trusting that he who knows the good shall do the good.
This guideline is deceptively simple. Until you make it a practice, you won’t believe how many wrong turns it helps you avoid. Starting with the behavior of the system forces you to focus on facts, not theories. It keeps you from falling too quickly into your own beliefs or misconceptions, or those of others.
When we draw structural diagrams and then write equations, we are forced to make our assumptions visible and to express them with rigor. We have to put every one of our assumptions about the system out where others (and we ourselves) can see them. Our models have to be complete, and they have to add up, and they have to be consistent. Our assumptions can no longer slide around (mental models are very slippery), assuming one thing for purposes of one discussion and something else contradictory for purposes of the next discussion.
Remember, always, that everything you know, and everything everyone knows, is only a model. Get your model out there where it can be viewed. Invite others to challenge your assumptions and add their own.
Decision makers can’t respond to information they don’t have, can’t respond accurately to information that is inaccurate, and can’t respond in a timely way to information that is late.
Honoring information means above all avoiding language pollution — making the cleanest possible use we can of language. Second, it means expanding our language so we can talk about complexity.
Language can serve as a medium through which we create new understandings and new realities as we begin to talk about them. In fact, we don’t talk about what we see; we see only what we can talk about. Our perspectives on the world depend on the interaction of our nervous system and our language — both act as filters through which we perceive our world.
A society that talks incessantly about “productivity” but that hardly understands, much less uses, the word “resilience” is going to become productive and not resilient. A society that doesn’t understand or use the term “carrying capacity” will exceed its carrying capacity.
The thing to do, when you don’t know, is not to bluff and not to freeze, but to learn. The way you learn is by experiment — by trial and error, error, error.
Systems thinking can only tell us to do that. It can’t do it. We’re back to the gap between understanding and implementation. Systems thinking by itself cannot bridge that gap, but it can lead us to the edge of what analysis can do and then point beyond — to what can and must be done by the human spirit.