Dumar and Waring’s article discusses the recovery and
investigation process that followed the Challenger launch accident. It was interesting to read about the
progress of the investigation, which quickly escalated into bold conclusions and
finger-pointing. Although a faulty O-ring seemed to be the primary cause, poor communication and a demanding flight schedule were also outlined as flaws in the system. What I found especially interesting in this
article was the numerical risk assessment developed by Bell Labs and the Air Force
that sought to aid decision-makers by providing probabilistic statements of
risk. This computer-aided system
traced causes of potential malfunctions to identify the likelihood of failure
in the various parts. As stated by
Will Willoughby, the head of the Agency’s quality office during Apollo,
“Statistics don’t account for anything.
They have no place in engineering analysis anywhere.” Could this be true? The article goes on to say, “NASA
engineers were uncomfortable with probabilistic thinking and argued that
meaningful risk numbers could not be assigned to something as complicated and
subject to changing stresses as the Space Shuttle.”
Rather than assigning probability estimates to parts, NASA
chose to use failure mode analysis that attempted to identify worst-case
problems. Not surprisingly,
statistical analysis tends to introduce political debate. This can be seen in the projected probabilities of the Shuttle's boosters failing. The Space Shuttle Range Safety Ad Hoc Committee claimed that the Shuttle's boosters were likely to fail on 1 in 10,000 flights. Feynman found that the engineers expected a failure in 1 of every 200 or 300 launches, while the managers expected a failure in only 1 of every 100,000 flights. Feynman was thus justified in concluding that the managers had grossly miscalculated as a result of "fantastic faith in machinery." This further highlights the lack of communication at the time. From my understanding, those involved in the investigation were extremely dedicated to the launch program and worked relentlessly to understand exactly what went wrong and how it could have been prevented. Although risk management is effective in a number of situations, I think you could argue that certain situations have too many unknowns, which hinder the proper calculation of probable chance.
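To get a feel for just how far apart those estimates were, here is a quick back-of-the-envelope sketch; the 100-flight program length and the assumption that launches are independent are mine, chosen purely for illustration, and none of these numbers come from the article itself.

```python
def prob_at_least_one_failure(per_flight_failure_prob, num_flights):
    """Chance of seeing at least one failure over a series of independent launches."""
    return 1 - (1 - per_flight_failure_prob) ** num_flights

# The per-flight estimates Feynman reported, written as probabilities.
estimates = {
    "engineers (~1 in 250)": 1 / 250,
    "managers (1 in 100,000)": 1 / 100_000,
}

for label, p in estimates.items():
    risk = prob_at_least_one_failure(p, num_flights=100)
    print(f"{label}: {risk:.1%} chance of at least one failure in 100 flights")

# engineers (~1 in 250): 33.0% chance of at least one failure in 100 flights
# managers (1 in 100,000): 0.1% chance of at least one failure in 100 flights
```

Seen this way, the two camps were not just disagreeing about a number; they were describing completely different programs.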
At the same time, as seen in Cornell and Fischbeck’s
article, risk management is often a useful tool. Their article highlights how the probabilities of different contributing events can be combined to estimate the likelihood of the primary event, and how to adapt accordingly. This illustrates how the evaluation of risk can be an effective management tool.
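As a rough illustration of that kind of chained reasoning (all of the probabilities below are placeholders I invented, not figures from Cornell and Fischbeck's study), the risk of the primary event can be built up from the conditional probabilities of the events that lead to it:

```python
# Hypothetical per-flight probabilities, invented only to show the structure.
p_initiating_event = 0.02        # some component is damaged on a given flight
p_escalation_given_event = 0.10  # the damage propagates to a critical failure
p_loss_given_escalation = 0.50   # the critical failure leads to loss of the vehicle

# Chain the conditional probabilities to estimate the primary event.
p_primary_event = (p_initiating_event
                   * p_escalation_given_event
                   * p_loss_given_escalation)

print(f"Estimated per-flight risk of the primary event: {p_primary_event:.3%}")
# Estimated per-flight risk of the primary event: 0.100%
```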
It is truly amazing what can be known based on probable chance and how
variations influence risk. I find it interesting how we as humans put probability to use. Even if the occurrence of something is 1 in 1,000, we still can't help but think of the chance that it could happen. Relating it back to the Shuttle launch, could there have been too many unknowns for statistical data to be useful in decision-making? I believe
probability can play an influential role in almost anything, especially when
calculations can be made on smaller components and other influential factors
surrounding the event. This brings into question how we interpret probability. What is an acceptable amount of risk, and how does this probability relate to the value of a human life? Obviously we willingly face risk every day, but at what point do we say it is just not worth it, and how does the possible outcome factor into this calculation?
After reading all of these articles about risk, communication seems to be an underlying component. We have seen the effects of communication on the discussion of risk. As we saw in the movie, there was no way to communicate to stop the bombing. In last week's articles, the way we communicate affected our emotions, which either hindered people or threw them into action. This week's article about the Challenger pointed to the communication errors between the engineers and managers. Would communicating errors really keep risks from occurring, though? We talked about this a lot last class. If, as humans, we knew the risk, would we still take it?

This made me think a lot about our discoveries and advancements. With every major failure or destruction there is new knowledge on a subject. I have been thinking a lot about medical testing and the risks involved behind it, anywhere from a new drug to new treatments. I think I've been pondering this because a student I work with has autism, and the family is looking into new medical studies to "cure" autism. They openly know the risks and are still considering putting their child through this medical testing. That was a bit off topic, but it still involves great risk to human life and raises the question of how to communicate those risks. The question still remains: when does the loss of human life outweigh the knowledge gained?
Will, I agree with your idea that the number of unknowns in a given situation can affect the ability to adequately calculate the probability of risk. I find it very intriguing that we can use these calculations of risk to accommodate or prepare for a specific event. I believe that many times humans don't acknowledge or put considerable thought into the risks they are taking, and also that a small number are aware of the risk but have a feeling of invincibility. They do not think it will happen to them, that they will be the ones injured, killed, or affected by tragedy. I think many individuals do not care enough to identify possible accident scenarios and really consider the probability that they may occur. I also found it interesting when Pate-Cornell and Fishbeck discuss how the agency changed its attitude from "launch if proven safe" to "launch unless proven unsafe." Their article effectively showed, I believe, how risk analysis can be incredibly useful.
Will, I also found NASA's aversion to numerical risk assessment to be interesting. I've always found it interesting that although statistical models are constructed on the premise of being objective and rely entirely on math, different parties can come up with entirely different risk assessments of the same event. For example, Teledyne Energy Systems estimated the probability of failure to be 1 in 100 flights, whereas a study by Johnson Space Center put the probability of failure at 1 in 100,000 flights. These figures are dramatically different, yet they represent the same event. How, then, should these statistical projections be considered? It seems that even statistical models are not immune to political influences, and those influences can in turn have an impact on human life. We have been spending a lot of time discussing the monetary value of human life. NASA was faced with the problem of considering a probabilistic value of human life; they were forced to determine at what point the risk of losing human life outweighs the potential benefit of a successful mission. I found it alarming that probabilistic risk assessment was abandoned altogether during the lunar program after General Electric determined that the chance of landing on the moon successfully was less than five percent. Those figures did not reflect the desired outcome, and so they were ignored. Engineers claimed that because of the complications involved with the changing stresses experienced by the Space Shuttle, there was no way of accurately calculating risk figures. However, I realize that these decisions were complicated by differing probabilistic figures. If every risk assessment came back with less than a five percent chance of success, I am assuming that the first mission to the moon would never have left the earth. NASA must rely not only on mathematical models, but also on other means of evaluating risk.
Erin, I agree that it is interesting how different groups can come up with different probabilities. However, I feel like the reason some groups get different probabilities lies in the assumptions they make when they do the calculations. For instance, when it comes down to doing the math, it is possible that they saw some factors as "negligible." When different groups place emphasis on various factors, it can definitely change the probability of failure. As for the moon launch, I know a 5% chance of success seems small, but look at the probability of winning the lottery, or discovering oil in your backyard. When you compare those odds to the odds of going to the moon, it really puts it into perspective. Also, I feel that our patriotic duty to defeat the Communistic Russians allowed us to look beyond mere numbers. We had a patriotic duty as the embodiment of Freedom to spread Democracy to the moon before the Russians, or anyone else, could infect it with their inferior governmental beliefs. 'Merica.
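To put rough numbers on that point about assumptions, here is a small sketch of two analysts assessing the same series of components, differing only in which small risks they treat as negligible. Every figure below is made up for illustration and is not taken from any of the studies discussed.

```python
def system_failure_prob(component_failure_probs):
    """A series system fails if any one of its components fails."""
    p_all_survive = 1.0
    for p in component_failure_probs:
        p_all_survive *= (1 - p)
    return 1 - p_all_survive

# Invented per-flight component failure probabilities.
full_model = [0.002, 0.001, 0.0005, 0.0005, 0.001]   # every factor counted
trimmed_model = [0.002, 0.001]                        # "negligible" factors dropped

print(f"Full model:    about 1 in {round(1 / system_failure_prob(full_model))}")
print(f"Trimmed model: about 1 in {round(1 / system_failure_prob(trimmed_model))}")
# Full model:    about 1 in 200
# Trimmed model: about 1 in 334
```

Neither analyst is doing the math wrong; they simply started from different judgments about which factors matter.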
In regard to the question of whether or not stats "don't account for anything," I feel as though stats do account for a lot. While it is true that stats don't necessarily "predict the future" (someone can calculate a 99.9% chance of something being safe and something horrible happens anyway, as with the Challenger), stats tend to give someone a general idea of how likely something is to happen. After all, if NASA engineers had calculated that the Challenger was 70% likely to have problems while out in space, I'm sure that the engineers would have gone back and kept refining the Challenger until that number went down drastically.
I think that no matter what we do, we will always encounter risk (even if we don't think about it). For example, simply walking across the street is risky since I might accidentally get run over by a car. But I think that there is an acceptable amount of risk, though it depends on what you're doing. After all, if there were no acceptable risk, then the world would be a much more boring place to be. For the Challenger, I would say that given the stats presented prior to launching the space shuttle, that amount of risk would be acceptable. However, I do think that people should have taken more time and care to consider all possibilities of how the shuttle would react in certain situations, especially regarding the O-ring.
When it comes to actually using probability to determine a course of action, I think that the major factor influencing whether or not someone makes a decision, even knowing the risk, is how much that person cares about what is going on. Obviously, this is something that is difficult to measure, and it can mean different things in different circumstances. For instance, people who text and drive regularly simply do not care - I highly doubt they even think of the danger every time they get their phone out while driving. It's just not even a conscious thing. And so, this lack of caring allows them to take risks. On the flip side, caring too much may also force people to take risks they otherwise may not have. While I agree with most of Jared's post above, I believe that it is not the assumptions people make about the calculations, but rather their own personal biases that influence how they read the outcomes of calculations. It does not have to be political in nature at all. People have opinions on just about everything. Even if you think you don't care about the outcome of a decision, I guarantee that if you assign one outcome to side A of a coin and another outcome to side B and flip it, you will feel a pull toward either A or B. A better way to manage risk, then, would be to also examine the people assessing the risk, discovering the personal motivations and slight biases that may alter the results of any risk calculation they do.
I also found the statement "Statistics have no place in engineering analysis anywhere" to be fascinating. I think many of us have a tendency to think of stats, or any statement that involves numbers really, as infallible truths, when the reality is that statistics can be and are influenced by politics just as much as any other subject. I guess this shows that human intuition and creativity cannot be replaced by even the most advanced computer program. That being said, I think the statement above is completely unreasonable. The failure of the statistical model to accurately predict the risk involved in the flight was a result of human error rather than the fault of statistics. What about all of the times statistics HAVE accurately predicted risk? Or predicted anything, for that matter?