Blackout in a Power Station!!!!
What follows was published in the Telecom Digest in mid December 1996 in response to questions on telecommunications in the electricity industry. Subscriptions to the Telecom Digest can be made by sending EMAIL to ptownson@massis.lcs.mit.edu. The original article can be found on the Internet at http://hyperarchive.lcs.mit.edu/telecom-archive

By: Darryl Smith <vk2tds@ozemail.com.au>
In February this year I was working in a power station in the electrical section - OK, I was the electrical section - almost - I was straight out of uni. I had an electrical engineering boss who came with the power station and will leave if it ever gets shut down, but not before; and a technical officer who was an expert in power systems protection. And we had a uni student on a semester's work placement (known as a cadetship).
Power stations are places where things can be very expensive to repair if they go wrong. Think about 200 tonnes of metal spinning at 3000 RPM, and then put a turbine on the end. That is the size of thing I am talking about. And the generator is filled with hydrogen gas to improve its cooling efficiency (650 MWatts capacity in hydrogen becomes 200 MWatts in air). To keep the hydrogen in, high-pressure oil is pumped onto the spinning shaft at each end to keep the air out, and the oil is then put in a vacuum to remove the water and oxygen from it. Oil is also used to cool the bearings, which get quite hot given the weight of the generator rotor - and the bearing at one end is insulated. If the bearings get too hot they melt, causing a short circuit between the rotor and the shaft; and since you have spinning metal in a magnetic field, the case gets magnetised (the case is iron and weighs over 100,000 kg).
Demineralised water is used to cool the hydrogen in the generator, and flows inside the copper conductors in the stator (after all, a generator is only 99% efficient - that means 6.5 MWatts of heat is produced!).
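That back-of-the-envelope figure checks out; a minimal sketch of the arithmetic, using only the numbers given above (650 MWatt capacity, 99% efficiency):

```python
# Rough generator heat-loss arithmetic from the figures in the article.
rated_output_mw = 650   # generator capacity quoted above
efficiency = 0.99       # "only 99 % efficient"

# The heat dissipated is roughly the 1% that doesn't come out as electricity.
heat_mw = rated_output_mw * (1 - efficiency)
print(f"{heat_mw:.1f} MW of heat")  # → 6.5 MW of heat, matching the article
```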
In addition there is a boiler about 11 stories tall, with various cameras inside at $500,000 each which need constant cooling or else they will burn.....
All the important stuff - oil pumps, cooling pumps etc. - runs on DC motors. And we have two of every important motor, fed from different switchboards, and usually from different batteries. By usually I mean that is the normal case; sometimes we need to take a battery bank out and have both switchboards supplied from the same battery. But each battery is constantly on charge from a battery charger / DC supply, and these chargers change over automatically if they lose supply from their primary AC switchboard.
So for the 11 kV system there are:

- Unit A and B switchboards (with incoming supplies from one of two locations), able to be connected in the center.
- Station A and B switchboards in a similar configuration.

In terms of DC we have the same sort of thing, running on 24, 50, 110 and 240 Volt DC batteries.
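The charger changeover just described can be sketched as a simple decision rule - this is a toy illustration of the idea, not the plant's actual control scheme, and all names here are invented:

```python
# Toy sketch of the DC supply redundancy described above: each battery
# charger falls back to an alternate AC switchboard if its primary dies,
# and the battery carries the load if both AC feeds are lost.
# Names and structure are illustrative only.

def charger_supply(primary_ac_ok: bool, alternate_ac_ok: bool) -> str:
    # Automatic changeover: use the primary AC board while it is healthy,
    # otherwise switch to the alternate; the battery rides through either way.
    if primary_ac_ok:
        return "primary AC board"
    if alternate_ac_ok:
        return "alternate AC board"
    return "battery only"

print(charger_supply(True, True))    # → primary AC board
print(charger_supply(False, True))   # → alternate AC board
print(charger_supply(False, False))  # → battery only
```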
Back to February 1996 - my supervisor was off for the day (we have a 9 day fortnight) and had just rung up to make sure nothing had gone wrong. Of course not. That would only happen after he hung up. The technical officer had just left the building too. And I was plant owner for switchboards - the expert.
The lights in the admin building went dark, as did my computer terminal. When they didn't return after a few seconds I grabbed a torch, the cadet and a hard hat, and raced to the admin switchboard. When I got there one of the managers was already looking at what had gone wrong - he told me the incoming supply had died, and where the switchboard was that supplied it, as I didn't know....
I was the second person to get to the room - and the room was dark except for two tiny 12 volt 6 watt fluorescent lights, for a room over 10 meters by 25 meters. The emergency lighting inverters were on the section's work list, but they were not as high a priority as some other work.... One of the assistant power plant operators (APPOs) tried to determine what had happened. Luckily the protection technician came in soon after and we were able to work it out: the new ash disposal system had caused the switchboard to trip. But why?
Everyone decided to go to the control room at this stage - which was packed. The manager was there, as were a lot of extras hoping that the blackout would let them try their power plant simulator training on the real thing. The auxiliary computer was out of action due to the power failure; the main computer was running, but not all information went to it.
The computer was down because the UPS had failed (a 40 kWatt UPS running off 240 Volt DC). At this stage the station was still running at full load (650 MWatts). We then worked out what had happened.
The ash disposal system was very new, with cables placed on above-ground cable trays to get the wiring to it - 3 x 2 inch 11 kV cables running for about a mile. It was supplied from the 1/2 and 3/4 ends of the power station by separate cables. The 1/2 end was supplying the current to the switchboard the people at the other end wanted to work on. So they opened the circuit breaker to the switchboard - leaving the breaker from the 3/4 end able to be remotely closed at ANY time. Then they short circuited the incoming supply on the LIVE side. This caused the switchboard to trip to protect itself.

Then they started work. They didn't hear the big bang which was produced when they closed the short.
Back in the control room people mentioned hearing a bang, and us experts thought the switchboard might have been damaged. To protect anyone doing work, all work must be isolated - including outgoing supplies - with a visible break that must be impossible to electrically bypass; all this isolation must be verified by someone before any work can be done; and each person going into the area needs to be signed in and out, with everyone out of the area before power can be applied.
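Those isolation rules amount to a small state machine; here is a minimal sketch of the logic, with all class and method names invented for illustration (this is not any real permit-to-work system):

```python
# Toy permit-to-work sketch of the rules above: work cannot start until
# isolation is verified, and power cannot be restored until everyone has
# signed out of the area. All names here are illustrative.

class WorkPermit:
    def __init__(self):
        self.isolation_verified = False
        self.people_in_area = set()

    def verify_isolation(self):
        # A second person confirms the visible break before work starts.
        self.isolation_verified = True

    def sign_in(self, person: str):
        if not self.isolation_verified:
            raise RuntimeError("no verified isolation - work cannot start")
        self.people_in_area.add(person)

    def sign_out(self, person: str):
        self.people_in_area.discard(person)

    def may_restore_power(self) -> bool:
        # Everyone must be signed out before the area is re-energised.
        return len(self.people_in_area) == 0

permit = WorkPermit()
permit.verify_isolation()
permit.sign_in("fitter")
print(permit.may_restore_power())   # → False (someone still in the area)
permit.sign_out("fitter")
print(permit.may_restore_power())   # → True
```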
There were problems opening up this switchboard, since we needed to isolate the circuit which had been short circuited - no one wanted to rely on that crew to do anything correctly. And then an alarm went off. One of the large motors (8 MWatt, 47,000 kg) had tripped for an unknown reason. The reason would have appeared on the computer - which was out of power.
Now, this motor was very important (known as an ID fan, it sucked air out of the boiler), so power production went down to 400 MWatts, meaning a large decrease in income ($30 / MWatt-hour means that about $10,000 is lost for each hour that we are not operating at full load). So there were now two problems we had to face:
- Could the motor be put back into service? (Its cost was about $10 million and the spare was off site anyway.)
- Could the switchboard be put back into service?
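The lost-income figure can be sanity-checked from the numbers given earlier (650 MWatts full load, 400 MWatts reduced load, $30 per MWatt-hour); a minimal sketch of the arithmetic:

```python
# Rough lost-revenue arithmetic from the figures in the article.
full_load_mw = 650       # normal output quoted above
reduced_load_mw = 400    # output with the ID fan tripped
price_per_mwh = 30.0     # $30 / MWatt-hour quoted above

shortfall_mw = full_load_mw - reduced_load_mw     # 250 MW not generated
loss_per_hour = shortfall_mw * price_per_mwh      # dollars per hour
print(f"${loss_per_hour:,.0f} lost per hour")     # → $7,500 lost per hour
```

This simple estimate gives $7,500 per hour; the article's "about $10,000" is the same order of magnitude, the difference presumably being rounding or costs beyond the raw energy price.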
The technician and I decided to get into the back of the 11 kV switchboard - against our safety rules - leaving the cadet in the control room in case anyone decided to do anything stupid, and so that our absence would not be so noticeable.
On opening the switchboard we found no problems, which was very good - and we stayed away from the high voltage conductors all the same. Still, we needed to officially test the switchboard - and it took 18 hours to get the board isolated so it could be tested and inspected (finding nothing). Having seen inside the switchboard, the decision was to test it and put it back into service as soon as we could.
With the motor, we decided that the computer should be brought back on line first. When it came back we found that what had caused the problem with the motor was an oxygen sensor that had lost power when it should not have. It told the computer that there was not enough oxygen in the boiler, so the computer told the fan to change the angle of its blades so that it was not extracting the oxygen. This caused the motor to overheat, and the protection took it out.
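The failure chain above - dead sensor reads as "low oxygen", the controller reacts, the fan motor suffers - can be sketched as a toy control step. All names and thresholds here are invented for illustration; the plant's actual control system is not described in that much detail:

```python
# Toy sketch of the failure chain: a de-energised oxygen sensor reads the
# same as genuinely low oxygen, so the controller pitches the ID fan blades
# back to stop extracting oxygen - the wrong action for a healthy boiler.
# The setpoint and names are invented for illustration.

def controller_step(sensor_powered: bool, true_o2_pct: float) -> str:
    # A sensor with no power reads 0 - indistinguishable from real low O2.
    reading = true_o2_pct if sensor_powered else 0.0
    if reading < 2.0:                 # illustrative low-oxygen setpoint
        return "reduce blade pitch"   # stop extracting oxygen from the boiler
    return "hold blade pitch"

# Normal operation: healthy sensor, healthy boiler.
print(controller_step(True, 3.5))    # → hold blade pitch
# The incident: the sensor lost power while boiler oxygen was actually fine.
print(controller_step(False, 3.5))   # → reduce blade pitch (wrong action)
```

The point the sketch makes is that the controller cannot tell a failed sensor from a real low-oxygen condition - it acted correctly on bad data.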
We decided that since the motor was probably undamaged, and no more damage would be done by re-starting it, we may as well re-start it - which we did. That was OK.
By this time it was 4:00 in the afternoon and time to go home. On the next day (Tuesday), after getting the switchboard back in service after testing (except for the circuit to the ash system), it was found that the cable to the ash system had shifted. The 3 conductors were held together by clamps every 3 feet, and each cable was 2 inches in diameter. Between clamps the cables had pushed away from each other, so that in the middle they were about 2 inches apart - and this for the whole mile of the run. During the short, the cables moved, hitting the cable trays and causing a big BANG.
And you might think the story finished there. On the Thursday, whilst I was in the weekly team leaders meeting telling the manager that things were back to normal, there was another blackout - although this one was less serious. In this case a link on a current transformer was not closed correctly, causing the same switchboard on the 3/4 end of the station to trip.
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
These stations have over 60,000 drawings and many more manuals, and are over 600 meters from one end to the other without the cooling towers (in other words, the building only).

And finally, the DC systems - 200,000 kg of lead-acid batteries are used.