• Fuzzy Fox (unregistered) in reply to Anony Moose

    I had to reread the last bit a few times to understand it...

    It wasn't that James hadn't verified his ability to log in to the server remotely.

    He couldn't log in because they had powered the server down an hour earlier than he'd expected.

  • (cs) in reply to cavemanf16
    cavemanf16:

    So, for instance, we are required to work projects and always get them done on time and around the original budget, but once that project is done we don't really have to justify any cost savings, additional revenue, or improved reliability in the system. So guess what suffers on projects? That's right, the quality.

    This happens because of how people are rewarded at companies. Usually managers get bonuses or good performance reviews for getting things done on time without going over budget. Few people are rewarded for quality in itself. Managers may be concerned about quality, but not at the expense of their paychecks. Even when there's a recognition that quality is important, it can be difficult to provide the right incentive ("next quarter's goal is 5% more quality").

  • anon mouse (unregistered) in reply to Jer

    James may be criminally liable if he knows about this and does not report it. I suggest he look into it.

  • Scottford (unregistered)
    Anonymous:

    3. As part of the lawsuit against BP for the big Texas City Refinery explosion in 2005, it was discovered that BP assigns a cost to human lives lost.

    -Mike H. 

     I don't think this is particularly unusual or even unethical. You have to have some criteria to decide how much to spend on safety. There are obvious extremes: should we spend $10 on a hardhat? Of course. Should we spend $10B on an exoskeleton if it will cut the risk by 0.0001%? Um, no. But in the middle, you need some rational way to decide how much your [employees'] safety is worth.

     

  • Scottford (unregistered) in reply to Scottford
    Anonymous:

     I don't think this is particularly unusual or even unethical. You have to have some criteria to decide how much to spend on safety. There are obvious extremes: should we spend $10 on a hardhat? Of course. Should we spend $10B on an exoskeleton if it will cut the risk by 0.0001%? Um, no. But in the middle, you need some rational way to decide how much your [employees'] safety is worth.

     A related followup: the federal highway department assigns a number to human life. I don't know exactly what the number is but I seem to recall hearing about $2M. That means if they can spend less than $2M on a guardrail and it will save at least one life, they will. If saving one life will cost them more than $2M, then they don't do the project.
     

  • gjm (unregistered)

    Give us code. like in the good old days (only 3 weeks ago)  Or..  is their no WTF code available. Spread your wings, I am a professional Progress Programmer and I realy pitty all those so called c(#/+/++) programmers. F*cking idiots talking about ENTERPRICE. I am sorry.., you do not have the faintest clue about 4GL programming.

     

     CAPTCHA; PROGRESS

     

  • Gabe (unregistered)

    I was plugging in a computer at a hospital OR and I noticed there were two different colors of outlets -- orange and grey. The grey ones are on regular city power and turn off when there's a blackout; the orange ones have backup power. Keep in mind this is an operating room. (What could you possibly have plugged in that you wouldn't need to keep working when the power goes out with a patient whose heart is possibly removed from his chest? It seems that surgeons like to listen to the radio while operating.)

    So you'd figure the best thing to do is plug the computer into the orange outlet, right? No, I'm told that you only want to plug devices with battery backups into the orange outlets.

    WTF? It turns out that the power to the orange outlets goes off for 10 minutes every month when they test the backup system. Any medical device critical to a patient's life is going to have a built-in battery so that patients can be moved about without having extension cords snaking all across the hospital.

     

    Another WTF: At a small ISP we had servers with no UPS and no budget for one. At some point we acquired a small UPS in order to mitigate brief (i.e. a few seconds) power outages that sometimes happen during thunderstorms. I assumed that the proper thing to do was to wait until the middle of the night to bring down the servers and plug them into the UPS. Our sysadmin couldn't wait and decided to do it during the middle of the day. Unfortunately the little UPS couldn't handle the load and it took about an hour of downtime to decide that we were better off without it. In other words, the presence of the UPS caused an hour of prime-time downtime.

  • Petr Gladkikh (unregistered) in reply to yakko

    Anyway, for every measure intended to keep a system online you can imagine a scenario that leads to a shutdown. Live with it. Even if you use a more powerful UPS, an automatic generator, solar panels, and three redundant failover servers: it can be cloudy, the outage may last so long that the UPS and generator run out of power, and an atom bomb may explode in the city and melt all those servers together. Keeping the system up 100% of the time is not the goal. Your actual goals usually look different, so you need to weigh risks against consequences. In this case a clean shutdown is a big deal, since a 40-minute outage then means 50 minutes of downtime instead of 8 hours. Some gas may leak during that time and some crews will be an hour late, but things may still be recoverable, whereas 8 hours may lead to total disaster. And I don't think it would hurt much if this system were offline for one second every hour.

    And so the UPS, instead of crying for help, should initiate a proper shutdown procedure in advance (while there's still enough power left in the UPS). That would be a saner solution if they don't want to spend money.
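
    A minimal sketch of what that could look like, assuming a hypothetical polling interface to the UPS (real daemons such as apcupsd or NUT expose equivalent status calls and can run a script like this for you):

        import subprocess
        import time

        SHUTDOWN_THRESHOLD_MIN = 10  # shut down while the battery can still finish the job

        def on_battery():
            # Placeholder: replace with a real query to your UPS daemon (apcupsd, NUT, ...).
            return False

        def minutes_remaining():
            # Placeholder: replace with a real query to your UPS daemon.
            return 999

        while True:
            if on_battery() and minutes_remaining() < SHUTDOWN_THRESHOLD_MIN:
                # Shut down cleanly *before* the battery is exhausted, so the
                # database can be closed properly instead of corrupted.
                subprocess.run(["shutdown", "-h", "now"])
                break
            time.sleep(30)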

     

     

  • (cs) in reply to Scottford
    Anonymous:
    Anonymous:

     I don't think this is particularly unusual or even unethical. You have to have some criteria to decide how much to spend on safety. There are obvious extremes: should we spend $10 on a hardhat? Of course. Should we spend $10B on an exoskeleton if it will cut the risk by 0.0001%? Um, no. But in the middle, you need some rational way to decide how much your [employees'] safety is worth.

     A related followup: the federal highway department assigns a number to human life. I don't know exactly what the number is but I seem to recall hearing about $2M. That means if they can spend less than $2M on a guardrail and it will save at least one life, they will. If saving one life will cost them more than $2M, then they don't do the project.
     

    This is pretty usual, especially in the gas business (I work for one of the main independent oil and gas companies). You have to assign a value to each asset when you do a quantitative risk assessment, and people are also an asset. However, this cost is not really a "cost of a human life", but rather a measure of how effective a safety measure must be to actually be implemented. If a safety measure costs more to implement than the cost of the human asset it's supposed to protect, then that safety measure is probably too ineffective to be implemented successfully anyway (typically such measures have high maintenance costs, which means they break often). Better to keep the money for safety training or to enhance safety measures that *do* make a difference.

    As the GP indicated, it's all about having a rational way to talk about risk and price of safety. It is *not* a hard limitation of the cost of safety measures.
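
    As a toy illustration of that decision rule (all figures invented), the comparison is simply: implement a measure only if it costs less than the value assigned to the lives it is expected to save.

        VALUE_PER_LIFE = 2_000_000  # e.g. the ~$2M figure mentioned for highway projects

        def worth_implementing(measure_cost, expected_lives_saved):
            # Implement a safety measure only if it costs less than the value
            # assigned to the lives it is expected to save.
            return measure_cost < expected_lives_saved * VALUE_PER_LIFE

        print(worth_implementing(1_500_000, 1.0))          # True: build the guardrail
        print(worth_implementing(10_000_000_000, 0.001))   # False: skip the exoskeleton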

  • [ICR] (unregistered) in reply to gjm

    "I realy pitty all those so called c(#/+/++) programmers"

    C+?

    And I would be surprised if no-one knew at least two 4GLs

  • QuoteReader (unregistered)

    Funny. The moment I was reading this post, power went down in the 28-story building I work in. The UPS took over in the blink of an eye. I didn't even lose a single window on my screen...

  • sazoo (unregistered) in reply to tin

    I travel to work every day by public transport; this involves a very short connection at my nearby main-line station:

    1) A local train for 2 stops - 7 min wait
    2) A national train for 2 stops - 10 min wait
    3) A free company shuttle bus

    On the whole this is a reasonable journey and (against all expectations) it takes about the same amount of time as driving and costs a lot less per week than the fuel to drive would.

    So anyway, one of the guys on the train that I regularly chat to works for the local train company... He doesn't take the train every day, as sometimes he needs to be in earlier than the trains run. But on the days when he's on a later shift he travels by train. However, this local train company, and my route in particular, has a number of reliability issues.

    If he is late for work as a result of (his company's) train service issues, he gets told off for being late. Apparently the official stance of the train company is that their staff should NOT rely on using the trains to get to work... And yet in the UK that is exactly what the government would like the rest of us to do!! Thankfully I have a very understanding boss and flexitime, so on the regular (at least once a week) occasions when this happens and I miss my connection, it's not such an issue for me!

  • Cope with IT (unregistered) in reply to Raymond Chen

    Raymond Chen:
    I still don't get it. The next time there's a power outage, the PAM stays up, the router stays up, but James can't sign on because he has no electricity!

    He then might get a snall UPS....

  • Cope with IT (unregistered) in reply to Cope with IT

    Of course I meant to type "small"...

  • (cs) in reply to BA
    Anonymous:
    marvin_rabbit:
    frosty:

    Anonymous:
    Sounds like they need a bigger gas tank for that UPS!

     What happens when all the gas stations are bone dry from everyone evacuating, and shipments in are delayed by blocked roads?  Do you call in a refueling helicopter or something? 

    An excellent point!  How will they detect gas leaks when there is no gas left?!

     ...  hey,  wait a minute .... 

    Natural gas, not gasoline.
     

    The real WTF is that in America, "gas" is a liquid. 

  • ambrosen (unregistered) in reply to Kay
    Anonymous:
    Anonymous:

    That's when he discovered, though, that a power-hungry server will run on a 300 watt UPS for all of 4 seconds before draining the battery. The users were not especially happy that afternoon.
     

    That's about 1200Ws or 1000As at 1.2V or 300mAh at 1.2V. Your UPS was either powered by a single, half drained AAA cell or broken to begin with. Or you might be talking out of your ass, I'm not completely sure.


    Either that, or the server was drawing much more than 300W, the voltage dropped, and it took four seconds before the server actually needed to draw more than the UPS could supply, at which point it crashed.

  • Martijn (unregistered) in reply to ptomblin

    ptomblin:
    I have my cable modem and wireless router on my UPS just in case the cable company aren't complete fuck-ups and they manage to keep the cable modem going through a power outage.  And if they don't, there's always dial-up - at least the phone company usually keeps going through power outages.

    All cable modems and external dial-up modems that I know of require external power.

    Either use an internal dial-up modem in the laptop (though more and more laptops lack one), or a mobile phone that can be used as a modem.

  • anonymous (unregistered) in reply to [ICR]

    I have fought with some of my coworkers about what to connect to a UPS. My idea was to connect only the CPU box, while the other guys argued for connecting the monitor as well. I think that if the power goes away, you can always blindly type Ctrl-S, or Alt-F2, sudo shutdown -h now [enter]  *********

    One of my family members has a computer with a damaged monitor that often doesn't work. I can use that computer without the monitor to print stuff: I simply wait for the computer to boot, then press WinKey-R, type the URL of a web page with a PDF (whatever.com/upload/data.pdf) and press [enter]. Wait for that slow crap Adobe Reader, then press Ctrl-P and Enter again.

    You can type your passwords without seeing the screen. And often you don't need the monitor for trivial tasks such as saving files, printing files, logging on, and logging off.

    It's also possible to detect whether your computer is virus-infected by the LEDs. If there's more hard disk activity than normal, or disk activity when nothing needs it, you have a virus. Maybe I am a freak, but I don't need a fucking antivirus if I can check the memory and check the LEDs often.

     

  • Martijn (unregistered) in reply to gjm

    Anonymous:
    I am a professional Progress Programmer and I realy pitty all those so called c(#/+/++) programmers.

    You mean the rumors are true? Some people are actually using Progress professionally? I guess that just lost me a bet with my fellow c+ programmers.

  • Martijn (unregistered) in reply to anonymous

    Anonymous:
    It's also possible to detect whether your computer is virus-infected by the LEDs. If there's more hard disk activity than normal, or disk activity when nothing needs it, you have a virus. Maybe I am a freak, but I don't need a fucking antivirus if I can check the memory and check the LEDs often.

    Are you serious or just being sarcastic?

  • Guran (unregistered) in reply to sazoo

    Well, that is actually a sane policy.

    If there are problems with the train service, the last thing you want is to have the very same guys who are supposed to fix the problem stuck on that same train! Sort of a meatspace deadlock...

  • Redshirt Coder (unregistered) in reply to Gabe
    Anonymous:

    I was plugging in a computer at a hospital OR and I noticed there were two different colors of outlets -- orange and grey. The grey ones are on regular city power and turn off when there's a blackout; the orange ones have backup power. Keep in mind this is an operating room. (What could you possibly have plugged in that you wouldn't need to keep working when the power goes out with a patient whose heart is possibly removed from his chest? It seems that surgeons like to listen to the radio while operating.)

    In fact, they tend to. But the trouble with the backup sockets is that they go offline for testing. That's a WTF in itself, in this case.

    But this insanity reminds me of a story a friend told me after they had performed an audit at one of the larger nationwide TV companies here. When the emergency test of the UPS was done by pulling the mains (wtf #1, imho), the UPS (consisting of several tons of batteries and diesel generators in the basement) almost instantly blew its fuses. Nosing around showed that the special wall sockets in the offices were all occupied by coffee machines, microwave ovens, or ACs. Keeping the computers running during a power outage was pretty hard to grasp for most of the journos, as "the viewers would not be able to watch TV anyway, without power" (sic). The UPS would have been able to keep them all running and the emergency lights up, but the test was in summertime (ACs on) around 10:00 (coffee break). Oh, and since the main UPS was redundant in itself and tested and maintained, there was no second UPS for "servers only" (wtf #2).

  • (cs) in reply to Gabe
    Anonymous:
    I was plugging in a computer at a hospital OR and I noticed there were two different colors of outlets -- orange and grey. The grey ones are on regular city power and turn off when there's a blackout; the orange ones have backup power. Keep in mind this is an operating room. (What could you possibly have plugged in that you wouldn't need to keep working when the power goes out with a patient whose heart is possibly removed from his chest? It seems that surgeons like to listen to the radio while operating.)


    I don't know about where you live, but I rather hope my local hospital's operating theatres would be clean. That cleaning equipment doesn't need to be on a UPS, and I'd rather it wasn't.
  • (cs) in reply to Bellinghman

    Years ago I had a consulting gig in the safety control room of another natural gas pipeline.  We were reworking their alarm monitoring system so it could run in parallel on redundant systems located in different cities, and so it would start up again in less than a minute if it shut down.

     The guys who actually worked in the safety control room were experienced old-timers.  One had visible burn scars on his face.  They weren't fooling around; they wouldn't even let the programmers ask one of them questions unless there were three of them in the room.

     Near the end of the project, the old and new systems were running in parallel, both driving the control room displays. The new systems had been installed with very solid UPS protection, backed up by good generators. My boss (a company employee programmer type) decided to test the new system by "injecting an alarm."  She chose to emulate a fire detector alarm in a compressor station someplace in West Texas I think.  We were in downtown Chicago.

    But she forgot to tell the guys in the control room it was a test! :-) I have NEVER seen three men move so fast. By the time she said, "we're testing," two helicopters were in the air and two fire departments had been notified.  They were quite happy after she convinced them it was a test, which took some doing.

    So, not all natural gas pipeline outfits are staffed by bozos!
     

  • gjm (unregistered) in reply to [ICR]

    Give us a list.

  • Andy (unregistered) in reply to Jay

    Is my math wrong?

    If the backup 300 watt UPS runs for 4 seconds, then the main, vital, big 1800 watt UPS will last for all of 24 seconds.

  • gjm (unregistered) in reply to Martijn
    Anonymous:

    Anonymous:
    I am a professional Progress Programmer and I realy pitty all those so called c(#/+/++) programmers.

    You mean the rumors are true? Some people are actually using Progress professionally? I guess that just lost me a bet with my fellow c+ programmers.

    Pitty you. Our 100m USD runs entirely on Progress. Business has been booming in Holland and Progress never let us down.. Sorry for your lost bet. Having said that, the real WTF is that enterprice solutions are written in C in stead of using Progress. Programming in Progress 10 (open Edge) does not mean that the database should be Progress as well.

    It seems your Progress knowledge is 'hear - say'. That's the problem with all C flavor programmers.  

      

     

  • (cs)

    To add to the collective experiences...

    A 2200VA UPS running a critical server made a sickening "Tweap!" and died a fraction of a second later when the mains went out one day. This UPS was connected to the server via serial link, and the monitoring software on the server said that the battery was fine. Pressing the "Test Battery" button on the front of the UPS also indicated things were fine. I won't mention the name of the UPS manufacturer, but its initials were APC.

    Another critical server had dual power supplies. Each power supply was plugged into a different UPS. One day, some new equipment was being installed and the power cord for one of the power supplies was pulled out of its UPS. The server died. It turned out that for some configurations of this server, you needed all three power supplies installed and two of them powered up, since one power supply couldn't run the server on its own. I won't mention the name of the server manufacturer, but its initials were IBM.

  • Harry (unregistered) in reply to frosty
    frosty:

    Anonymous:
    Sounds like they need a bigger gas tank for that UPS!

     

    What happens when all the gas stations are bone dry from everyone evacuating, and shipments in are delayed by blocked roads?  Do you call in a refueling helicopter or something? 

     
    This happened to the company I worked for during 9/11. The hosting center we used was located 6 blocks from the WTC. The oil-fired generators had 96 hours of fuel, and because of the traffic to the disaster area they were not able to get a tanker into the hosting facility. Eventually it went down. The people manning that facility made heroic efforts under the most difficult conditions imaginable, but there was nothing they could do.

     

    This is why many businesses now have geographically dispersed backups. 

     

  • (cs) in reply to Andy
    Anonymous:

    Is my math wrong?

    If the backup 300 watt UPS runs for 4 seconds, then the main, vital, big 1800 watt UPS will last for all of 24 seconds.

    No, "Kay"'s physics was wrong. If the UPS can support a load of 300W, and a 400W load is attached, then it's going to be like a brown out. The server may run on its own power supply's capacitance for a bit, and then die. Put the same load on the 1800W UPS, and it should keep the voltage high enough to keep the server until the battery is drained.
  • Anonymous (unregistered) in reply to Iago
    Iago:

    The real WTF is that in America, "gas" is a liquid. 

    No, the real WTF here is that you believe a compressed gas anywhere in the world other than America is not a liquid. 

  • Anonymous (unregistered) in reply to Anonymous
    Anonymous:

    No, the real WTF here is that you believe a compressed gas anywhere in the world other than America is not a liquid. 

    "Perhaps the laws of physics cease to exist on your stove."
    -Vinny Gambini

  • (cs) in reply to gjm
    Anonymous:

    Give us code. like in the good old days (only 3 weeks ago)  Or..  is their no WTF code available. Spread your wings, ...

    <font size="+1">L</font>ook for articles that have a "[CodeSOD]" in the title.  These are code related WTFs.

  • (cs) in reply to anonymous
    Anonymous:

    It's also possible to detect whether your computer is virus-infected by the LEDs. If there's more hard disk activity than normal, or disk activity when nothing needs it, you have a virus. Maybe I am a freak, but I don't need a fucking antivirus if I can check the memory and check the LEDs often.

     

    Hmm, my LED blinks about once every 2 seconds at home. I've been trying to figure out what it is, but no luck. I don't think I have a virus, but I can't be sure. I've got Norton AV with up-to-date definitions. I suspect it's just something continually checking whether anything is in my DVD drive. I've killed off services one by one and nothing seems to help.

  • Anon (unregistered)

    Wait. I don't understand: why did someone need to log in remotely? Couldn't they simply have written a script that packs the DB and is called by the UPS when it senses that the power has gone out (and has been out for x minutes, if you want)? Every UPS I know of allows you to do this. WTF?
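
    Something along those lines is indeed common; a sketch of the kind of script a UPS daemon could be configured to run on a "power lost" event (the service name and the database "pack" command below are placeholders, not the real tools):

        #!/usr/bin/env python3
        # Sketch: run by the UPS daemon when mains power is lost.
        import subprocess
        import sys

        def main():
            # 1. Stop the application so the database is quiescent.
            subprocess.run(["service", "pipeline-monitor", "stop"])  # placeholder service name

            # 2. Pack/flush the database while there is still battery left.
            result = subprocess.run(["pack-database", "/var/lib/pam/db"])  # placeholder command and path
            if result.returncode != 0:
                print("pack failed; shutting down anyway", file=sys.stderr)

            # 3. Power the box off cleanly.
            subprocess.run(["shutdown", "-h", "now"])

        if __name__ == "__main__":
            main()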

  • YodaYid (unregistered) in reply to gjm

    How is a discussion about your favorite programming language remotely relevant to the topic(s) at hand?

    gjm:
    Anonymous:

    Anonymous:
    I am a professional Progress Programmer and I realy pitty all those so called c(#/+/++) programmers.

    You mean the rumors are true? Some people are actually using Progress professionally? I guess that just lost me a bet with my fellow c+ programmers.

    Pitty you. Our 100m USD runs entirely on Progress. Business has been booming in Holland and Progress never let us down.. Sorry for your lost bet. Having said that, the real WTF is that enterprice solutions are written in C in stead of using Progress. Programming in Progress 10 (open Edge) does not mean that the database should be Progress as well.

    It seems your Progress knowledge is 'hear - say'. That's the problem with all C flavor programmers.  

      

     

  • Loren Pechtel (unregistered) in reply to Kay
    Anonymous:
    Anonymous:

    That's when he discovered, though, that a power-hungry server will run on a 300 watt UPS for all of 4 seconds before draining the battery. The users were not especially happy that afternoon.
     

    That's about 1200Ws or 1000As at 1.2V or 300mAh at 1.2V. Your UPS was either powered by a single, half drained AAA cell or broken to begin with. Or you might be talking out of your ass, I'm not completely sure.

     
    More likely, the maximum current draw of the UPS was exceeded and it shut down even though it had battery power left.

  • Mike (unregistered) in reply to Dazed
    Anonymous:
    Erick:
    Millions of dollars in fines plus the risk of being shut out of business vs. the purchase of a single generator.

    Talk about stubborn.

    More to the point: what about the risk of people being killed?

    Appalling.

    That's an interesting question, but risk managers find ways around answering it.

    Backup generators have been known to fail as well. It happened to a TV station in San Francisco during a big blackout. It happened to the Los Angeles Fire Dept. communication center after the Northridge earthquake; well, the generator worked, but its cooling system gave out after a few hours.

    SCADA & computers work fine for pipelines if everyone knows how to use them right. There was a big double rupture of a pipeline in Virginia in 1980 because the dispatcher saw a pressure spike coming down the pipeline, but chose not to use the automated shutdown procedure. He instead opened an automated valve to bleed off the pressure into a storage tank. But valves take a few minutes to fully open. Oops.

    So what if a pipeline control center loses its power during a pressure spike? That's when you'll hope you didn't buy a piece of $#%^ UPS!

  • guest (unregistered)

    I work as a mission-critical environment technician.  The thing to realize is that there is far more to a backup generator system than just the generator.  You need fuel storage and monitoring, fuel pumps, fuel cleaning, daytanks on the generators to supply fuel until the gen restores power for the main pumps, regular maintenance contracts for the generator and associated gear, load banks to test that the generator can support the specified load, etc.

     Then you need all the associated switchgear (believe me, heavy duty 3-phase switchgear is VERY EXPENSIVE.  You could easily buy a home for the cost of the gear you'll need), automatic controls to take the generator online and offline, syncing gear (if more than one gen), and you also need staff to regularly inspect and operate all the stuff.

    "Adding a generator" to a mission critical system will cost something on the order of 1 million USD and will have a definite large cost associated with it for maintenance.

  • m0ffx (unregistered) in reply to Andy

    OK, LOTS of people here seem confused about power, measured in watts (= volts × amps); energy, measured in (kilo)watt-hours or joules; and charge, measured in amp-hours or coulombs.

    Energy / power = time. Power = voltage × current.

    If the UPS was rated as "300W", that tells you the power. It doesn't say anything about the energy, and the runtime will depend on the power drawn. But plug a >>300W server into it, and the 4 seconds described is probably the time it took for the internal components to fuse under the excess load.

    Considerations for a UPS are both the power output (or current output, which if the voltage is fixed is the same thing) and the energy/charge stored. The first needs to be safely above the power drawn by whatever is being UPSed; the second should be enough to last as long as you need. Obviously it's not worth making an intranet server in an office run for days if the employees don't have power to their desktops.
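
    A small helper to keep those units straight (the battery figures below are just an example):

        # Power (W) = volts x amps; energy (Wh) = power x hours; charge (Ah) = amps x hours.
        def energy_wh(voltage_v, charge_ah):
            # Energy stored in a battery, from its voltage and amp-hour rating.
            return voltage_v * charge_ah

        def runtime_minutes(stored_energy_wh, load_w):
            # How long that energy can supply a given load, ignoring inverter losses.
            return stored_energy_wh / load_w * 60

        # Example: a 12 V, 7 Ah battery (a common small-UPS cell) feeding a 300 W load.
        e = energy_wh(12, 7)            # 84 Wh
        print(runtime_minutes(e, 300))  # ~17 minutes, before inverter losses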

    CAPTCHA: java. JavaSCRIPT seems to be what is causing a lot of fuckups with this forum. I can't login with Konqueror :@

  • Richard (unregistered) in reply to anonymouse

    Don't most UPSs send an "I'm dying" message to the PC that they power, allowing the system to shut itself down cleanly? Or is this a UNIX thing?

  • annon (unregistered) in reply to Local view
    Anonymous:
    frosty:

    Anonymous:
    Sounds like they need a bigger gas tank for that UPS!

    What happens when all the gas stations are bone dry from everyone evacuating, and shipments in are delayed by blocked roads?  Do you call in a refueling helicopter or something? 

     Something like that. About 10 years ago, I worked for a company that got it. The whole building was on a UPS for surge protection. They had 3 very large turbines as a triple backup, a 30,000 gallon tank to keep them humming, standing contracts with multiple geographically disperse fuel suppliers, and monthly drills to fire the whole thing up for 10 minutes, and then switch back - live - midday. Except for the clouds of black smoke billowing out of the exhaust stacks, you couldn't even tell that the switch had occurred. All of this was at both the primary AND disaster sites. It was the one thing to which nobody ever said no (not the business, not the executives - it was $whatever-you-need - *very* rare!)



    this place "gets it"...

    http://www.designbuild-network.com/projects/gchq/gchq1.html

    the server hall is massive and two of those external "sheds" house the massive standby generators and chiller plant

  • Coyote (unregistered) in reply to Loren Pechtel

    Which leads us back to the original problem, if what you say is true (very likely): when the power goes down again, they will expect the UPS to save the day, but it'll just make a dull tweet and die. Then they'll end up with another 8+ hours of downtime... Luckily, the third time's a charm with most management, and they'll want it solved at any cost this time.

     

    captcha: error 

  • (cs) in reply to Coyote

    Sometimes a UPS is not enough.

    I worked at a major UK bank quite a few years ago. The building was designed over the top of two huge generators as well as having multiple external power feeds. They only ever had two power outages whilst I was there (12 years). One was where electrical contractors managed to cut through a live (450v ?) cable - after "checking that it was not live"  oops!

    But the other was a classic. Each "machine room" (this was a few years ago) had airlock-like sets of double doors, and to exit the room you trod on a large red floor button about 2 metres from the doors to get them to open. One Friday afternoon we had an engineer on site from an unnamed company with the initials IBM. When he went to leave the room he got to the doors and couldn't open them - so, being a bright spark, he looked around and saw a button on the wall - under a clear plastic flap. He lifted the flap and pushed the button. Said button had a large notice above it: "EMERGENCY POWER OFF". It's amazing how strange and unearthly it seems being in a completely silent room - except for the manager escorting the engineer sobbing quietly!

    Said engineer was invited to exit the building & not return!

    Strangely, a few years earlier I had, in the course of some horseplay one night shift, knocked that selfsame button off the wall so it was just hanging by its cable. I can't recall the b*s@it that was used to try and excuse that - thank goodness it didn't shut the systems down, or I might have been looking for a new job!

  • Mike (unregistered) in reply to Richard

    Anonymous:
    Don't most UPSs send an "I'm dying" message to the PC that they power, allowing the system to shut itself down cleanly? Or is this a UNIX thing?

    Many of them do, but sometimes the UPS batteries check out OK until they're actually under load.

    While we're at it, it also pays to have a second set of backup data. Sounds simple, but I remember hearing about a big bank in the '80s losing account info when a mainframe hard drive died.

  • Mike (unregistered) in reply to Hamish
    Hamish:

    I worked at a major UK bank quite a few years ago. The building was designed over the top of two huge generators as well as having multiple external power feeds. They only ever had two power outages whilst I was there (12 years). One was where electrical contractors managed to cut through a live (450v ?) cable - after "checking that it was not live"  oops!

    At one place I worked, one of the maintenance guys dropped a piece of solder on a 480v bus. It was the flash seen around the fab!

    Then there was the semiconductor place I worked for some years ago. One lot of 24 wafers cost about $100,000 to $200,000. When some processing machines there lost power, the whole lot, or several lots, were ruined. So, after several internally & externally caused power outages, a special UPS was installed for the most loss-critical equipment. Well, it sounded good, but the first failure was caused by a ladder that slipped in the subfab and pulled some of the UPS cables for one area. Big hit in the wafer loss dept.

    Then, some months later, there was a big power phase disruption in the region. The fab disconnected from commercial power, but the UPS failed to kick in: it didn't care about the phase dip, it could only sense that there was still commercial power, so it decided it didn't need to kick in. Even bigger hit in the wafer loss dept.

    So, you can see my concerns about UPS systems & pipelines. 
