An interesting post on unintended consequences.
Sorry for the light posting, but I’m back in Florida, in the last throes of getting the house on the market.
I’m struck by the eerie similarities between this episode and what happened at Three Mile Island. In fact the article mentions this, but not in enough detail. At TMI one of the main culprits was that the valve-closed sensor was on the SOLENOID used to control the valve that let steam out of the pressurizer, not on the valve itself. So when the valve stuck open, the sensor reported the valve closed because the solenoid had retracted correctly. Assuming the valve was closed led the operators to mistakenly think the pressurizer was “solid” (full of water) when in fact it was not. In Lawrence/Andover it was also a sensor at fault, one that was correctly reading pressure, but from the wrong main!
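To make that indication problem concrete, here is a minimal sketch in Python (all names hypothetical, not taken from either incident’s actual control system) of the difference between inferring valve position from the solenoid command and reading it from a sensor on the valve itself:

```python
# Minimal sketch, with hypothetical names, of indicated-versus-actual valve state.

class ReliefValve:
    def __init__(self, stuck_open=False):
        self.solenoid_energized = False  # what the control system commanded
        self.is_open = False             # actual mechanical position
        self.stuck_open = stuck_open     # mechanical fault

    def open(self):
        self.solenoid_energized = True
        self.is_open = True

    def close(self):
        self.solenoid_energized = False
        if not self.stuck_open:          # a stuck valve ignores the command
            self.is_open = False

# TMI-style indication: infer "closed" from the de-energized solenoid.
def indication_from_solenoid(valve):
    return "OPEN" if valve.solenoid_energized else "CLOSED"

# Indication from a position sensor mounted on the valve itself.
def indication_from_valve_position(valve):
    return "OPEN" if valve.is_open else "CLOSED"

valve = ReliefValve(stuck_open=True)
valve.open()
valve.close()
print(indication_from_solenoid(valve))        # CLOSED -- what the operators saw
print(indication_from_valve_position(valve))  # OPEN   -- what was actually true
```

The stuck valve reads CLOSED on the first indicator and OPEN on the second; the control room at TMI effectively only had the first.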
This is why passively safe systems, which are safe by nature instead of by operation, are to be preferred.
Nuclear, unfortunately, went down the road of light water reactors. These are great for submarines, not so great for power plants. It’s been argued for decades that this choice doomed the nuclear industry to failure, and I agree with that for mutually reinforcing reasons of safety and cost.
What they should have done is design with safety/cost in mind. This means excluding volatile materials from the nuclear island. The size of the containment building of an LWR (which costs more than the reactor proper) is determined by the volume of steam it has to contain in an accident. No water means no steam, and the containment building can be radically smaller and cheaper.
Another common thread in these is the nature of the human/machine interface. Humans are very bad at being thrust suddenly into a complex environment with warning bells going off and trying to make sense of what happened and what to do. They are very good at noticing changes in an environment where they are paying attention. So if your shower starts changing temperature, you notice. If you’re looking out the window of a car and the car starts drifting to the left, you notice. But the key in that is that the human is “in the loop”: they have “situational awareness” and a memory of what is going on and how they got there. If you flip a switch and something bad starts to happen, your first thought is likely to be “flip that back and see if it gets better” — but that implies a memory that you had flipped the switch in the first place.
A good control system that relies on human override to the automatic controls has to keep the human “busy enough to pay attention”. A conventional human-operated car is a good example. If you let go of the wheel for more than a short time, bad things start to happen — same thing if you look away from the road. This maintains your alertness and keeps you aware of the situation. A car in which you are encouraged to turn your attention elsewhere until some warning indicator goes off will have a short period where you sort of “wake up” and have to start paying attention, which is likely to come along with confusion about what the right action is to take. And the more complex the range of control inputs — the more ‘choices’ the operator has about correct action — the more likely the inputs are to be in error.
Often the apparent goal of automation engineers is to design a system which takes *no* human attention, until it needs *rapid, correct, and intelligent* human attention. In such a system, the fault lies not with the widget that eventually goes wrong, or with the operator who did the wrong thing in an emergency. The fault lies with the automation designer, who designed a system that does not pay attention to the way human beings operate.
Hmm… Interesting implications for Tesla and other autonomous driving efforts.
What I can’t get my mind around is that they did the cut-over without anyone monitoring the pressure in the new line and in direct communication with whoever was operating the valve.
What happened was exactly what would have happened if the pilot lines had failed or become blocked for any reason. It’s what they build pressure relief valves for. The problem here is that a large volume of flammable gas, even at very low pressure, is a challenge to relieve safely.
The structures Hayes calls receivers are properly called gasometers or gas holders and are not simply tanks. Their internal volume actually expands and contracts at nearly constant pressure as gas is added or withdrawn.
https://en.wikipedia.org/wiki/Gas_holder
A weighted movable piston inside rises and falls to maintain pressure without venting the gas.
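As a rough idealization (treating the holder’s bell as a frictionless weighted piston of mass M and cross-sectional area A), the stored gas sits at

```latex
P_{\text{gas}} \approx P_{\text{atm}} + \frac{M g}{A}
```

None of those terms depend on how much gas is inside, which is why adding or withdrawing gas simply raises or lowers the piston while the pressure stays nearly constant.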
I have a relative who worked for a water utility down in California. He had been there a long time, but before he retired they had started to automate a lot of the functions that people used to do. He said that the younger people would have no idea what to do if the computers went out, because they didn’t know what valves and gates were where and what they did.
The linked example seems a lot like this: a lot of skilled people and a high-tech system, but no one with a detailed enough understanding of how the big-picture system works to anticipate the accident.