Admin
I had to reread the last bit a few times to understand it...
It wasn't that James hadn't verified his ability to log in to the server remotely.
He couldn't log in because they had powered the server down an hour earlier than he'd expected.
Admin
This happens because of how people are rewarded at companies. Usually managers get bonuses or good performance reviews because of getting stuff done on time without going over budget. Few people are rewarded for quality by itself. Managers may be concerned about quality, but not at the expense of their paychecks. Even when there's a recognition that quality is important, it can be difficult to provide the right incentive ("next quarter's goal is 5% more quality").
Admin
James may be criminally liable if he knows about this and does not report it. I suggest he look into it.
Admin
I don't think this is particularly unusual or even unethical. You have to have some criteria to decide how much to spend on safety. There are obvious extremes: should we spend $10 on a hardhat? Of course. Should we spend $10B on an exoskeleton if it will cut the risk by 0.0001%? Um, no. But in the middle, you need some rational way to decide how much your [employees'] safety is worth.
Admin
A related followup: the federal highway department assigns a dollar value to a human life. I don't know exactly what the number is, but I seem to recall hearing about $2M. That means if they can spend less than $2M on a guardrail and it will save at least one life, they will. If saving one life will cost them more than $2M, then they don't do the project.
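To make the trade-off concrete, here's a minimal sketch of that decision rule in Python; the $2M figure and the function are purely illustrative, not from any actual highway department policy:

VALUE_OF_STATISTICAL_LIFE = 2000000  # illustrative figure from the comment above

def build_the_guardrail(project_cost, expected_lives_saved):
    # Spend the money only if the cost per life saved stays under the threshold.
    if expected_lives_saved <= 0:
        return False
    return project_cost / expected_lives_saved <= VALUE_OF_STATISTICAL_LIFE

print(build_the_guardrail(1500000, 1))  # True: $1.5M per life saved is under $2M
print(build_the_guardrail(3000000, 1))  # False: $3M per life saved is over the line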
Admin
Give us code, like in the good old days (only 3 weeks ago). Or... is there no WTF code available? Spread your wings, I am a professional Progress Programmer and I realy pitty all those so called c(#/+/++) programmers. F*cking idiots talking about ENTERPRICE. I am sorry.., you do not have the faintest clue about 4GL programming.
CAPTCHA; PROGRESS
Admin
I was plugging in a computer at a hospital OR and I noticed there were two different colors of outlets -- orange and grey. The grey ones are on regular city power and turn off when there's a blackout; the orange ones have backup power. Keep in mind this is an operating room. (What could you possibly have plugged in that you wouldn't need to keep working when the power goes out with a patient whose heart is possibly removed from his chest? It seems that surgeons like to listen to the radio while operating.)
So you'd figure the best thing to do is plug the computer into the orange outlet, right? No, I'm told that you only want to plug devices with battery backups into the orange outlets.
WTF? It turns out that the power to the orange outlets goes off for 10 minutes every month when they test the backup system. Any medical device critical to a patient's life is going to have a built-in battery so that patients can be moved about without having extension cords snaking all across the hospital.
Another WTF: At a small ISP we had servers with no UPS and no budget for one. At some point we acquired a small UPS in order to mitigate brief (i.e. a few seconds) power outages that sometimes happen during thunderstorms. I assumed that the proper thing to do was to wait until the middle of the night to bring down the servers and plug them into the UPS. Our sysadmin couldn't wait and decided to do it during the middle of the day. Unfortunately the little UPS couldn't handle the load and it took about an hour of downtime to decide that we were better off without it. In other words, the presence of the UPS caused an hour of prime-time downtime.
Admin
Anyway, for every measure intended to keep a system online, you can imagine a scenario that leads to a shutdown. Accept it. Use a bigger UPS, an automatic generator, solar panels, and three redundant failover servers, and it could still be cloudy, the outage could outlast both the UPS and the generator, and an atom bomb could go off in the city and melt all those servers together. Keeping the system up 100% of the time is not the goal. The real goals usually look different, so you have to weigh risks and consequences. In this case a clean shutdown is a big deal, because then a 40-minute outage means 50 minutes of the system being offline instead of 8 hours. Some gas may leak in that time and some crews will be an hour late, but things are still fixable, whereas 8 hours may lead to total disaster. And I don't think the system being offline 1 second every hour would hurt much.
So instead of crying for help, the UPS should initiate a proper shutdown procedure in advance, while there's still enough power left in it. That would be the saner solution if they don't want to spend money.
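A rough sketch of what that could look like, in Python; read_ups_status() here is a made-up stand-in for however your UPS monitoring actually reports its state, and the 5-minute margin is just an example:

import subprocess
import time

SHUTDOWN_MARGIN_SECONDS = 300  # start a clean halt while ~5 minutes of battery remain

def read_ups_status():
    # Stand-in for a real query over the UPS's serial/USB link.
    # Returns (on_battery, estimated_runtime_left_in_seconds).
    return False, 1200

while True:
    on_battery, runtime_left = read_ups_status()
    if on_battery and runtime_left <= SHUTDOWN_MARGIN_SECONDS:
        # Don't wait until the battery dies mid-write; halt cleanly while we still can.
        subprocess.run(["shutdown", "-h", "now"])
        break
    time.sleep(10)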
Admin
This is pretty usual, especially in the gas business (I am working for one of the main independent oil and gas company). You have to assign a value to each asset when you do a quantitative risk assessment, and people are also an asset. However, this cost is not really a "cost of a human life", but rather a measure of how effective a safety measure must be to be actually implemented. If a safety measure costs more to implement than the cost of the human asset it's supposed to protect, then it means that safety measure is probably too ineffective to be implemented successfully anyway (typically such measures have high maintenance costs, which means they break often). Better keep the money for safety training or to enhance safety measures that *do* make a difference.
As the GP indicated, it's all about having a rational way to talk about risk and price of safety. It is *not* a hard limitation of the cost of safety measures.
Admin
"I realy pitty all those so called c(#/+/++) programmers"
C+?
And I would be surprised if no-one knew at least two 4GLs
Admin
Funny. Just as I was reading this post, the power went down in the 28-story building I work in. The UPS took over in the blink of an eye. I haven't even lost a single window on my screen...
Admin
I travel to work every day by public transport; this involves a very short connection at my nearby main-line station:
1) A Local train for 2 stops - 7 min wait
2) A National train for 2 stops - 10 min wait
3) A free company shuttle bus
On the whole this is a reasonable journey and (against all expectations) it takes about the same amount of time as driving, and per week it costs a lot less than the fuel to drive would.
So anyway, one of the guys on the train that I regularly chat to works for the local train company. He doesn't take the train every day, as sometimes he needs to be in earlier than the trains run, but on the days when he's on a later shift he travels by train. However, this local train company, and my route in particular, has a number of reliability issues.
If he is late for work as a result of (his own company's) train service issues, he gets told off for being late. Apparently the official stance of the train company is that their staff should NOT rely on using the trains to get to work... and yet in the UK that is exactly what the government would like the rest of us to do!! Thankfully I have a very understanding boss and flexitime, so on the regular days (at least once per week) when this happens and I miss my connection, it's not such an issue for me!
Admin
He then might get a snall UPS....
Admin
Of course I meant to type "small"...
Admin
The real WTF is that in America, "gas" is a liquid.
Admin
Either that or the server was drawing much more than 300W, the voltage dropped, and it took four seconds before the server really needed to draw more than 300W so it crashed.
Admin
All cable modems and external dial-up modems that I know of require external power.
Either use an internal dial-up modem in the laptop (though more and more laptops lack one), or a mobile phone that can be used as a modem.
Admin
I have fought with some of my coworkers about what to connect to a UPS. My idea was to connect only the CPU box; the other guys argued for connecting the monitor as well. I figure that if the power goes away, you can always hit Ctrl-S, or ALT-F2, "sudo shutdown -h now", [enter], typing the password blind (*********).
One of my family members has a computer with a damaged monitor that often doesn't work. I can use that computer without the monitor to print stuff: I simply wait for the computer to boot, press WINKEY-R, type the URL of a web page with a PDF (whwatever.com/upload/data.pdf), then press [enter]. Wait for that slow crap Adobe Reader, then press Ctrl-P and Enter again.
You don't need the monitor to type your passwords. And often you don't need the monitor for trivial tasks, like saving files, printing files, logging on, logging off.
It's also possible to detect whether your computer is virus-infected by the LEDs. If there is more hard disk activity than normal, or disk reads when nothing should need them, you have a virus. Maybe I am a freak, but I don't need a fucking antivirus if I can check the memory and check the LEDs often.
Admin
You mean the rumors are true? Some people are actually using Progress professionally? I guess that just lost me a bet with my fellow c+ programmers.
Admin
Are you serious or just being sarcastic?
Admin
Well, that is actually a sane policy.
If there are problems with the train service, the last thing you want is to have the very same guys who are supposed to fix the problem stuck on the same train! Sort of a meat-space deadlock...
Admin
In fact, they tend to. But the trouble with the backup sockets is that they go offline for testing. That's a WTF in itself, in this case.
But this insanity reminds me of a story a friend told me after they had performed an audit at one of the larger nation-wide TV companies here. When the emergency test of the UPS was done by pulling the mains (WTF #1, imho), the UPS (consisting of several tons of batteries and diesel generators in the basement) almost instantly blew its fuses. The nosing around that followed showed that the special wall sockets in the offices were all occupied by coffee machines, microwave ovens or ACs. Keeping the computers running during a power outage was pretty hard to grasp for most of the journos, as "the viewers would not be able to watch TV anyway, without power" (sic). The UPS would have been able to keep them all running, with the emergency lights up, but the test was in summer time (ACs on) around 10:00 (coffee break). Oh, and since the main UPS was redundant in itself and tested and maintained, there was no second UPS for "servers only" (WTF #2).
Admin
I don't know about where you live, but I rather hope my local hospital's operating theatres would be clean. That cleaning equipment doesn't need to be on a UPS, and I'd rather it wasn't.
Admin
Years ago I had a consulting gig in the safety control room of another natural gas pipeline. We were reworking their alarm monitoring system so it could run in parallel on redundant systems located in different cities, and so it would start up again in less than a minute if it shut down.
The guys who actually worked in the safety control room were experienced old-timers. One had visible burn scars on his face. They weren't fooling around; they wouldn't even let the programmers ask one of them questions unless there were three of them in the room.
Near the end of the project, the old and new systems were running in parallel, both driving the control room displays. The new systems had been installed with very solid UPS protection, backed up by good generators. My boss (a company employee programmer type) decided to test the new system by "injecting an alarm." She chose to emulate a fire detector alarm in a compressor station someplace in West Texas I think. We were in downtown Chicago.
But she forgot to tell the guys in the control room it was a test! :-) I have NEVER seen three men move so fast. By the time she said, "we're testing," two helicopters were in the air and two fire departments had been notified. They were quite happy after she convinced them it was a test, which took some doing.
So, not all natural gas pipeline outfits are staffed by bozos!
Admin
Give us a list.
Admin
Is my math wrong?
If the backup 300 watt UPS runs for 4 seconds, then the main, vital, big 1800 watt UPS will last for all of 24 seconds.
Admin
Pitty you. Our $100M business runs entirely on Progress. Business has been booming in Holland and Progress has never let us down. Sorry for your lost bet. Having said that, the real WTF is that enterprice solutions are written in C instead of using Progress. Programming in Progress 10 (OpenEdge) does not mean that the database has to be Progress as well.
It seems your Progress knowledge is hearsay. That's the problem with all C-flavor programmers.
Admin
To add to the collective experiences...
A 2200VA UPS running a critical server made a sickening "Tweap!" and died a fraction of a second later when the mains went out one day. This UPS was connected to the server via a serial link, and the monitoring software on the server said that the battery was fine. Pressing the "Test Battery" button on the front of the UPS also indicated things were fine. I won't mention the name of the UPS manufacturer, but its initials were APC.
Another critical server had dual power supplies. Each power supply was plugged into a different UPS. One day, some new equipment was being installed and the power cord for one of the power supplies was pulled out of its UPS. The server died. It turned out that for some configurations of this server, you needed all three power supplies installed and two of them powered up, since one power supply couldn't run the server on its own. I won't mention the name of the server manufacturer, but its initials were IBM.
Admin
This happened to the company I worked for during 9/11. The hosting center we used is located 6 blocks from the WTC. The oil-fired generators had 96 hours of fuel, and because of the traffic to the disaster area they were not able to get a tanker into the hosting facility. Eventually it went down. The people manning that facility made heroic efforts under the most difficult conditions imaginable, but there was nothing they could do.
This is why many businesses now have geographically dispersed backups.
Admin
No, the real WTF here is that you believe a compressed gas anywhere in the world other than America is not a liquid.
Admin
"Perhaps the laws of physics cease to exist on your stove."
-Vinny Gambini
Admin
Hmm, my LED blinks about once every 2 seconds at home. I've been trying to figure out what it is, but no luck. I don't think I have a virus, but I can't be sure. I've got Norton AV with up to date definitions. I suspect it's just something testing continually if something is in my DVD drive or not. I kill off services one by one and nothing seems to help.
Admin
Wait. I don't understand, why did someone need to log in remotely? Couldn't they simply have written a script that packs the DB and is called by the UPS when it senses that the power has gone out (and been out for x minutes if you want)? Every UPS I know of allows you to do this. WTF?
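Something like the sketch below is presumably what's meant; the pack_database command and the exact hook are hypothetical, and in practice this would be registered with whatever on-battery event your UPS monitoring software exposes:

import subprocess

def on_power_failure():
    # Called by the UPS monitoring software once mains power has been out
    # for the configured number of minutes.
    # 1. Pack/close the database cleanly (hypothetical command name).
    subprocess.run(["pack_database", "--all"], check=True)
    # 2. Flush filesystem buffers, then halt.
    subprocess.run(["sync"])
    subprocess.run(["shutdown", "-h", "now"])

if __name__ == "__main__":
    on_power_failure()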
Admin
How is a discussion about your favorite programming language remotely relevant to the topic(s) at hand?
Admin
More likely, the maximum current draw of the UPS was exceeded and it shut down even though it had battery power left.
Admin
That's an interesting question, but Risk Managers find ways around answering it.
Backup generators have been known to fail as well. It happened to a TV station in San Francisco during a big blackout. It happened to the Los Angeles Fire Dept. Communication Center after the Northridge Earthquake; well, the generator worked, but its cooling system gave out after a few hours.
SCADA and computers work fine for pipelines if everyone knows how to use them right. There was a big double rupture of a pipeline in Virginia in 1980 because the dispatcher saw a pressure spike coming down the pipeline,
but chose not to use the automated shutdown procedure. He instead opened an automated valve to bleed off the pressure into a storage tank. But valves take a few minutes to fully open. Oops.
So what if a pipeline control center loses its power during a pressure spike? That's when you'll hope you didn't buy a piece of $#%^ UPS!
Admin
I work as a mission-critical environment technician. The thing to realize is that there is far more to a backup generator system than just the generator. You need fuel storage and monitoring, fuel pumps, fuel cleaning, daytanks on the generators to supply fuel until the gen restores power for the main pumps, regular maintenance contracts for the generator and associated gear, load banks to test that the generator can support the specified load, etc.
Then you need all the associated switchgear (believe me, heavy duty 3-phase switchgear is VERY EXPENSIVE. You could easily buy a home for the cost of the gear you'll need), automatic controls to take the generator online and offline, syncing gear (if more than one gen), and you also need staff to regularly inspect and operate all the stuff.
"Adding a generator" to a mission critical system will cost something on the order of 1 million USD and will have a definite large cost associated with it for maintenance.
Admin
OK, LOTS of people here seem confused about power (measured in watts = volts x amps), energy (measured in (kilo)watt-hours or joules), and charge (measured in amp-hours or coulombs).
Energy / power = time. Power = voltage x current.
If the UPS was rated at "300W", that tells you the power. It doesn't say anything about the energy, and the lifespan will depend on the power drawn. But plug a >>300W server into it, and the 4 seconds described is probably the time it took for the internal components to fuse under the excess load.
Considerations for a UPS are both the power output (or current output, which is the same thing if the voltage is fixed) and the energy/charge stored. The first needs to be safely above the power drawn by whatever is being UPSed; the second should give a long enough runtime. Obviously it's not worth making an intranet server in an office run for days if the employees don't have power to their desktops.
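A quick worked example of the difference, with made-up numbers:

voltage = 12.0        # battery voltage in volts
capacity_ah = 7.0     # battery charge in amp-hours (typical small UPS battery)
energy_wh = voltage * capacity_ah   # stored energy: 84 watt-hours

load_w = 250.0        # power actually drawn by the server, in watts
runtime_min = energy_wh / load_w * 60
print(runtime_min)    # ~20 minutes of runtime, ignoring inverter losses

# But if the load exceeds the UPS's rated power output (say 300 W),
# it can trip or fry in seconds no matter how much energy is left in the battery.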
CAPTCHA: java. JavaSCRIPT seems to be what is causing a lot of fuckups with this forum. I can't login with Konqueror :@
Admin
Don't most UPSs send an "I'm dying" message to the PC that they power, allowing the system to shut itself down cleanly? Or is this a UNIX thing?
Admin
this place "gets it"...
http://www.designbuild-network.com/projects/gchq/gchq1.html
the server hall is massive and two of those external "sheds" house the massive standby generators and chiller plant
Admin
Which leads us back to: if what you say is true (very likely), then when the power goes down again they will expect the UPS to save the day, but it'll just make a dull tweet and die. Then they'll end up with another 8+ hours of downtime... Luckily the third time's a charm with most management, and they'll want it solved at any cost this time.
captcha: error
Admin
Sometimes a UPS is not enough.
I worked at a major UK bank quite a few years ago. The building was designed over the top of two huge generators as well as having multiple external power feeds. They only ever had two power outages whilst I was there (12 years). One was where electrical contractors managed to cut through a live (450v ?) cable - after "checking that it was not live" oops!
But the other was a classic. Each "machine room" (this was a few years ago) had airlock like sets of double doors and to exit the room you trod on a large red floor button about 2 meters from the doors to get them to open. One Friday afternoon we had an engineer on site from an un-named company with initials IBM. When he went to leave the room he got to the doors and couldn't open them - so, being a bright spark, he looked around and saw a button on the wall - under a clear plastic flap. He lifted the flap and pushed the button. Said button had a large notice above it "EMERGENCY POWER OFF" - It's amazing how strange and unearthly it seems being in a completely silent room - Except for the manager escorting the engineer sobbing quietly!
Said engineer was invited to exit the building & not return!
Strangely, a few years earlier, in the course of some horseplay on a night shift, I had knocked that self same button off the wall so it was just hanging by its cable. I can't recall the b*s@it that was used to try and excuse that - thank goodness it didn't shut the systems down or I might have been looking for a new job!
Admin
Many of them do, but sometimes the UPS batteries check out ok until they're under load.
While we're at it, it also pays to have a second set of backup data. Sounds simple, but I remember hearing about a big bank in the '80s losing account info when a mainframe hard drive died.
Admin
At one place I worked, one of the maintenance guys dropped a piece of solder on a 480V bus. It was the flash seen around the fab!
Then there was the semiconductor place I worked at some years ago. One lot of 24 wafers cost about $100,000 to $200,000. When some processing machines there lost power, the whole lot, or several lots, were ruined. So, after several internally and externally caused power outages, a special UPS was installed for the most loss-critical equipment. Well, it sounded good, but the first failure was caused by a ladder that slipped in the subfab and pulled out some of the UPS cables for one area. Big hit in the wafer loss dept.
Then, some months later, there was a big power phase disruption in the region. The fab disconnected from commercial power, but the UPS failed to kick in: it didn't care about the phase dip, it could only sense that commercial power was present, so it decided it didn't need to kick in. Even bigger hit in the wafer loss dept.
So, you can see my concerns about UPS systems & pipelines.