Error and Limitations

Human cognition is inherently fuzzy. Human performance is also fuzzy, and mistakes are common, if not inevitable, even with advanced skills and years of experience. It makes sense that there should be some slack in the evaluation of human performance and conduct. One of the common themes of storytelling is the incompetence of others, and humans take pleasure in recounting the errors that others make.

An industry of litigation has emerged around human error, and its pretense is that there are perfect humans who make no serious errors. The legal case for damages is built on the assumption of a standard of care and due diligence that exceeds the standards achieved in actual performance. If a surgeon amputates the wrong leg, a lawsuit against him is likely to succeed.

But surgeons, like all other humans, make mistakes every day – they forget to do things; they jump to conclusions when there is too little evidence and fail to make decisions when there is enough evidence; they misinform patients; they write undecipherable notes; they get tired, irritable and impatient. The problems that physicians and surgeons face are universal human problems. They face a constant barrage of events that are complex and uncertain. Their tools and understanding are limited, and their own needs are often neglected, so that their performance is compromised. On the plus side, you can argue that, given their limitations, medical doctors do well most of the time, creating some order out of random and chaotic events. However, not all doctors do well all the time.

When humans make mistakes, they often claim: “I am only human.” Of course, that is a redundant statement, since we already know that they are human, but it does suggest that someone, somehow, expected them to perform at a superhuman level. The protest “I am only human” points to the principle that all humans perform imperfectly but judge others more harshly than they judge themselves. The indignant storyteller assumes the disguise of the perfect one who knows no error or sin.

In every culture, a complex fantasy of superhuman performance emerges that supports the delusion that humans do better than they actually do. This is a collective self-deception on a grand scale. Leaders and aristocrats with various pedigrees are often given unearned prestige, and superhuman abilities may be attributed to them. All humans, regardless of status, share basic tendencies and limitations. Inflated attribution will lead to disappointment sooner or later.

Self-deception and unrealistically high standards for others have social value and appear in every human group. Claiming a high standard makes it easy to shame, blame and discredit others who make mistakes. High standards are used to motivate group members to work harder, compete and achieve more. In the best case, high standards operate as attractors that align individuals with learning experiences that can improve performance.

Another function of high standards is to support claims of elite groups that they possess special qualities that others cannot attain or can only attain by seeking membership in the elite group. Humans can be described as animals with material ambitions and moral aspirations whose performance inevitably fails to meet their own expectations, but they ignore their own limitations and deny their own errors. A more realistic view is that even the smartest, nicest humans have distinct limitations, will routinely make mistakes, and occasionally, one of their mistakes will have major and tragic consequences.

The National Aeronautics and Space Administration (NASA) in the US is a prototype of interacting groups of smart people who sometimes cannot get it right. At NASA, the smartest scientists and engineers collaborate on space flights and other projects. NASA is also a showcase for US technology and carries a major public relations responsibility. NASA failures are highly visible tragedies that have been well studied. When the regular orbital flights of NASA’s shuttle began, managers estimated the risk of failure to be 1 flight in 100,000. After the explosion of the shuttle Challenger in January 1986, the physicist Richard Feynman, a member of the commission that investigated the accident, declared that NASA had exaggerated the reliability of its product to the point of fantasy. In 1988, when flights resumed, the revised estimate of the risk of catastrophic failure was 1 flight in 50.

After a decade of successful flights, the estimate of risk was improved to 1 in 254 flights. The shuttle Columbia disintegrated on re-entry in 2003, and the risk estimate became 1 in 100. A piece of insulating foam had fallen off the fuel tank 82 seconds after liftoff and struck a wing edge with sufficient force to punch a hole in the wing. On re-entry, hot gases entered the wing, causing progressive damage and the eventual disintegration of the shuttle. All of the astronauts aboard perished. NASA teams worked for two years and spent hundreds of millions of dollars trying to fix the foam problem. When the next shuttle took off in July 2005, pieces of insulating foam again broke off the fuel tank two minutes after launch but drifted away in the thin atmosphere. The shuttle completed its mission, but NASA, displaying appropriate caution and concern, announced that further flights would be suspended until the problem had really been fixed.

The actual risk of catastrophic failure of the shuttle as of 2005 was 2 failures in 113 flights, or 1 in 56.5. In his report on cognitive problems at NASA after the Challenger disaster, Feynman stated: “It appears that there are enormous differences of opinion as to the probability of a failure with loss of vehicle and of human life. The estimates range from roughly 1 in 100 to 1 in 100,000. The higher figures come from the working engineers, and the very low figures from management. What are the causes and consequences of this lack of agreement? Since 1 part in 100,000 would imply that one could put a Shuttle up each day for 300 years expecting to lose only one, we could properly ask: ‘What is the cause of management's fantastic faith in the machinery?’ We have also found that certification criteria used in Flight Readiness Reviews often develop a gradually decreasing strictness. The argument that the same risk was flown before without failure is often accepted as an argument for the safety of accepting it again. Because of this, obvious weaknesses are accepted again and again, sometimes without a sufficiently serious attempt to remedy them, or to delay a flight because of their continued presence.” Feynman concluded that a successful technology requires that reality take precedence over public relations, for nature cannot be fooled.
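The arithmetic behind these figures is easy to check. The short sketch below is a minimal illustration only; the flight and failure counts are the ones quoted in the text, not official NASA records. It reproduces the observed 1-in-56.5 failure rate and Feynman's point that a 1-in-100,000 risk would permit roughly three centuries of daily launches with only one expected loss.

```python
# Back-of-the-envelope check of the shuttle risk figures quoted above.
# The counts are those given in the text (illustrative, not official records).

failures = 2      # Challenger (1986) and Columbia (2003)
flights = 113     # shuttle flights flown as of 2005, per the text

observed_rate = failures / flights
print(f"Observed catastrophic failure rate: 1 in {1 / observed_rate:.1f} flights")  # ~1 in 56.5

# Feynman's illustration: a claimed 1-in-100,000 risk would mean one could launch
# a shuttle every day for about three centuries and expect to lose only one vehicle.
years_of_daily_launches = 100_000 / 365
print(f"A 1-in-100,000 risk implies one expected loss per {years_of_daily_launches:.0f} years of daily launches")
```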

From Intelligence by Stephen Gislason.