ASK OCE — July 20, 2006 — Vol. 1, Issue 10
ASK OCE Interview
Five Questions for Dr. Henry Petroski
Dr. Henry Petroski, a professor of both civil engineering and history at Duke University, is one of the nation's foremost writers on engineering. In addition to his scholarly publications and textbooks, he has authored thirteen books for general audiences on engineering subjects ranging from suspension bridges to the pencil.
In your recent book Success through Failure: The Paradox of Design, you write that failure-based design is more likely to succeed than design based on a successful model. That contradicts the conventional wisdom "if it ain’t broke, don’t fix it." Where does the conventional wisdom break down?
All conventional wisdom has an element of truth to it, but good design requires more than an element of truth — it requires an ensemble of correct assumptions and valid calculations. Just because something ain’t broke doesn’t mean that it won't break eventually. The responsibility of designers is to have a good idea when something will break and "fix" it before that happens. This is the essence of responsible maintenance, and a proper maintenance schedule should be part of a good design.
When something "ain’t broke," we can rightly call it a success. Unfortunately, the temptation is to base subsequent designs on that successful model. If those new designs were identical copies of what "ain’t broke," and if they were not expected to have a life beyond it, then we could expect them to be successful also. But it is human nature to infer from continued success even greater future success, well beyond what has been proven. It is also natural to want to assume that continued success implies a basic soundness to the successful design. This, in turn, can lead to a belief that the successful design is overdesigned, and so subsequent designs are made with what amounts to a lower factor of safety. The evolution (or devolution) of such designs can be expected to continue until something that "ain’t broke" today does break tomorrow.
Success can mask latent flaws in a design. Consider the Titanic. Had it not struck the iceberg and sunk on its first Atlantic crossing, the design of the "unsinkable" ship would likely have served as a model for subsequent transatlantic ocean liners, which would likely have been built to carry more passengers and to sail faster, perhaps with fewer lifeboats. As long as those ships experienced no failure, they would continue to grow and multiply. They would also continue to incorporate the Titanic’s basic design assumptions, which we now know were severely flawed, until one or more of the latent flaws caused a tragic failure to occur.
You call failure a "unifying principle in the design of things large and small, hard and soft, real and imagined." As a university professor, how do you pass this along to your students? Do you emphasize the study of failure more than your colleagues do?
I incorporate the idea of failure explicitly in all the courses I teach. I emphasize that virtually every engineering calculation is ultimately a failure calculation, because without a failure criterion against which to measure the calculated result, it is a meaningless number. Engineering calculations of stress, temperature, etc., are meaningful only in that they tell us how far from failure a particular design is.
I incorporate case studies of failure into my courses, emphasizing that they teach us much more than studies of success. It is not that success stories cannot serve as models of good design or as exemplars of creative engineering. They can do that, but they cannot teach us how close to failure a design is. And that is what an engineer must always know.
How can a large technical organization such as NASA, which has thousands of engineers, incorporate an understanding of the role of failure in the design process into its professional development approach?
A large organization can emphasize to its engineers that talking and thinking about failure are not signs of pessimism but are ways to keep the principal goal — the obviation of failure — in the forefront. Success is best achieved by being fully aware of what can go wrong in a design — and designing against its happening.
Case studies of failure should be made a part of the vocabulary of every engineer so that he or she can recall or recite them when something in a new design or design process is suggestive of what went wrong in the case study. Just being a naysayer or pessimist in a design review will not convince other engineers, who will naturally want a rational argument for why a design or design process is suspected of being flawed. Being able to describe a case study, even one that is outside the field of immediate interest, can be an effective way of getting everyone around the table to realize that they may be going down an analogously flawed path.
When a public organization like NASA, which receives its funding from the taxpayers, experiences a failure, it ends up in newspapers and Congressional oversight reports. Given its public visibility and accountability, what kinds of things can NASA do to increase its capacity to function as an organization that learns from failure?
Laypersons understand from their own personal life experiences that they take risks every day, and that they sometimes fail. Everyone understands that engineers are people and that organizations are groups of people, and so everyone understands that they are susceptible to the same shortcomings as an ordinary citizen. However, we expect engineers and managers, by virtue of their professional education, training, and experience, to recognize their professional limitations and so be vigilant in designing things of great value and consequence. The public rightly expects engineers and groups of engineers to be more careful and deliberate than the common person walking down the street or driving down the highway. People also understand that there is an element of luck in daily living and in great projects. Just as we can be killed by a bolt of lightning, so a great ship can be sunk by a chance encounter with an iceberg. So accidents will happen. When they do, it is important to make clear that everything humanly and reasonably possible had been done to prevent the accident in the first place, and to explain how the lessons learned from that accident will not be forgotten in future endeavors.
You mention that NASA faces a "communication gap between one generation and the next" as it prepares to return to the moon at the same time that it confronts a wave of retirements. What are some strategies for bridging this gap? Are there any organizations that NASA can learn from in this regard?
One way in which a professional generation gap might be narrowed is to establish means for the older generation to convey its war stories and hard-won experience to the younger. This may be done in a formal or informal way, but there should be a clear understanding among the young that the experiences of older engineers with earlier-generation designs are not irrelevant to the latest state of the art. Fundamental design assumptions (and understood limitations) do not become obsolete in a system that develops over decades. Indeed, they can be among the most critical pieces of knowledge that the younger engineers can inherit. The lessons of past failures should also be passed on explicitly to the next generation.