• Lerch98 (unregistered) in reply to MiniMax

    I named test rigs after ex-wives and girlfriends. People here at work asked me where I got all those names, and I told them. It was a big joke around the office, and now I am the official name-maker.

    captcha: genitus. Lerch the super genitus

  • Don L (unregistered) in reply to C-Derb
    C-Derb:
    ....that gives developers a heads-up as to *why* stuff is going to break, and that now is a good time to take a break and read TDWTF.com.

    No real developers read TDWTF.

  • Someone (unregistered) in reply to Roy
    MiniMax:
    Servers being used for staging are production servers - for QA.
    Good point.

    However, "servers in production for QA" is a bit of a long name. I suggest we abbreviate it to "staging servers". Sound good?

    And similar for the others.

    Roy:
    If the PM responsible wasn't able to secure this, it can't have been as business critical as he thought it was.
    Or he would have been able to secure it, but didn't want to admit to his higher-ups that he needed to.
  • (cs) in reply to MiniMax
    MiniMax:
    Nagesh:
    In our client location, we cannot send anything to staging. there are 3 servers named Monika, Phibi and Rachel.

    Monika is development. Phibi is test (UAT). Rachel is production.

    I can only put stuff on Monika.

    A sysadmin I had the dubious honor of working with named all his little utility scripts after dog names....
    One of my customers had workstations named after Star Trek characters. As they grew, they got desperate for names (but not too desperate - there was no "Tribble" yet) and were saved by The Next Generation.

  • Bill C. (unregistered) in reply to Nagesh
    Nagesh:
    I can only put stuff on Monika.
    Them's fighting words, bub.
  • Chelloveck (unregistered) in reply to Ozz
    Ozz:
    I knew a SysAdmin who named his servers after supermodels. That resulted in some interesting comments like "Heidi Klum went down on me last night."

    Our machines are named after movies. Batman, Robocop, Sneakers, Roxanne... The latter, of course, also being my wife's name. Yeah, it was funny the first couple times when someone told me that Roxanne went down on them again...

    captcha: populus "She's one populus woman!"

  • (cs) in reply to golddog
    golddog:
    Seems to me the real WTF is TTP's PM promising a go-live of April 1st, and not "x number of days after Development and UAT are finished."

    When there's a dependency, set expectations based on that dependency.

    My bet is TTP's PM reasoned "X days for development, plus Y days for integration and UAT, plus Z days to get through the promotion paperwork equals April 1," and his team missed dates left and right because he vastly underestimated.

    But I'm going to have to echo the others who said TRWTF is production clients being able to access staging.

  • (cs)

    The best was a company that used Star Wars planets for servers.

    "It looks like Alderaan blew up."

    Too soon?

  • (cs)

    This whole WTF calls for a little BOFH action.

    Now where is the elevator shaft? BZZZERT!!

  • (cs) in reply to Tuxie
    Tuxie:
    This could be a tale from almost any IT workplace in the world.

    Here's my version:

    We had a PM in one department have IT create servers for an application. They had the infrastructure guy do everything, including setting up the application. When he went to do routine "maintenance" on what was effectively the staging server, much the same thing happened: the end users had been using the app on the staging server as the production server. Pretty funny that it was the same guy who had set up the servers who rebooted the stage-duction server.

  • urza9814 (unregistered) in reply to jarfil
    jarfil:
    TRWTF is staging servers not getting wiped regularly.

    "You deployed your app on a staging server? Yeah, well, tough luck, they get wiped every morning."

    Also, rebooting of staging servers should be automated and just a click away for any developer; no need to call IT.

    "Your apps keep getting rebooted? On a staging server? Go figure."

    And what is that preposterous idea of anyone "objecting" before patching/upgrading the staging servers? If stuff breaks, it breaks; that's what they are for.

    Captcha: persto. Got problems with your staging server? rm -rf /. Presto.

    ...Thus guaranteeing that your developers will frequently spend hours getting data set up properly to investigate a defect, only to have that data wiped by someone else before they can finish.

    Anything you're doing to any server used by multiple people needs to be scheduled, with advance notification of any changes whenever possible. Your plan might work on a team of five devs, but try it with five hundred and nobody's gonna be able to get a damn thing done, because they'll all be stepping all over each other's work!
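
    If you really must auto-wipe, at least make the job announce itself first. A minimal sketch in Python - every path, host name, and address below is made up, so adjust for your own shop:

        # Nightly staging reset with an advance warning, so nobody's
        # half-finished repro data vanishes without notice.
        import smtplib
        import subprocess
        import time
        from email.message import EmailMessage

        STAGING_ROOT = "/srv/staging/"        # hypothetical: where staging apps live
        BASELINE = "/srv/staging-baseline/"   # hypothetical: known-good image
        DEV_LIST = "dev-team@example.com"     # hypothetical: team mailing list

        def notify(subject, body):
            msg = EmailMessage()
            msg["From"] = "staging-bot@example.com"
            msg["To"] = DEV_LIST
            msg["Subject"] = subject
            msg.set_content(body)
            with smtplib.SMTP("localhost") as smtp:   # assumes a local MTA
                smtp.send_message(msg)

        def reset_staging(warning_minutes=30):
            notify("Staging wipe in %d minutes" % warning_minutes,
                   "Everything under %s gets restored from the baseline. "
                   "Speak up now or copy your data off." % STAGING_ROOT)
            time.sleep(warning_minutes * 60)
            # rsync --delete makes staging an exact copy of the baseline
            subprocess.run(["rsync", "-a", "--delete", BASELINE, STAGING_ROOT],
                           check=True)
            notify("Staging wiped", "Fresh baseline restored; have at it.")

        if __name__ == "__main__":
            reset_staging()

    Run it from cron at 05:00 and the wipe at least stops being a surprise. It doesn't fix the five-hundred-dev problem, but it beats "presto".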

  • (cs) in reply to Nagesh
    Nagesh:
    In our client location, we cannot send anything to staging. there are 3 servers named Monika, Phibi and Rachel.

    Monika is development. Phibi is test (UAT). Rachel is production.

    I can only put stuff on Monika.

    And I'd always thought the ditsy one was spelt Phoebe. Blame my classical education. I don't watch this shit, but my wife does, and I can't escape the bunch of fuckwits.

  • (cs)

    'We' had a server named Woodchuck...

  • Bene (unregistered)

    I would like to line up everyone who names their servers after cars, planets, Disney characters, ex-girlfriends, and emoticons. For the love of God, think of the sysadmins. This naming convention will not scale, and it is a nightmare to maintain when your company gets big.

    Please, think of the sysadmins. Do not name your servers like this.

    Captcha: bene. My 2nd World of Warcraft character....

  • (cs)

    We never point our (all internal) customers to pages hosted on our staging webserver. Never, ever. All good, right?

    Except ... we have some pages that aren't exposed to customers, that run scheduled processes (updating files on the network, sending e-mails, etc.). We never put those on the production server, since they could conceivably be exposed to customers there. We design them on development and test them on staging, then schedule the run to look for the copy on the staging server.

    So--yesterday, our development and staging environments got wiped out. (Somehow--my manager said only that 'the responsible party confessed'.) We were scrambling for quite a while before the infrastructure team got those restored from backup.

    And mere hours later, I'm told that I'm to be the new supervisor of that intranet development team. Any ideas for some initial decrees? :)

  • hmmm (unregistered) in reply to Nagesh
    Nagesh:
    In our client location, we cannot send anything to staging. there are 3 servers named Monika, Phibi and Rachel.

    Monika is development. Phibi is test (UAT). Rachel is production.

    I can only put stuff on Monika.

    Poor Monika. Everyone puts stuff on Monika - and it's rarely her fault.

  • Jorg (unregistered) in reply to operagost
    operagost:
    Eric:
    Nagesh, your naming standard is bad and you should feel bad. Unless Monika makes great cake, Phibi is stupid and Rachel has great tits. Then you're spot on.
    No, because two out of three are misspelled.
    3
  • Mickey (unregistered) in reply to Bracket
    Bracket:
    "TTP" stands for "The TTP Project".
    I was expecting a punchline where someone had lost the "h" of a URL... ttp://google.com
  • (cs) in reply to SCSimmons
    SCSimmons:
    We never point our (all internal) customers to pages hosted on our staging webserver. Never, ever. All good, right?

    Except ... we have some pages that aren't exposed to customers, that run scheduled processes (updating files on the network, sending e-mails, etc.). We never put those on the production server, since they could conceivably be exposed to customers there. We design them on development and test them on staging, then schedule the run to look for the copy on the staging server.

    So--yesterday, our development and staging environments got wiped out. (Somehow--my manager said only that 'the responsible party confessed'.) We were scrambling for quite a while before the infrastructure team got those restored from backup.

    And mere hours later, I'm told that I'm to be the new supervisor of that intranet development team. Any ideas for some initial decrees? :)

    A non-public-facing production server... it's a thing.

  • ItWuzAlwaysAboutDahNumbaz (unregistered) in reply to eViLegion
    eViLegion:
    If it turned out that the guy then murdered a high ranking politician, I'm pretty sure I'd be in the clear.

    What if the high ranking politician was the vice president? Or worse still, a closely related family member...?

  • Dodgy Mike (unregistered) in reply to Tuxie
    Tuxie:
    This could be a tale from almost any IT workplace in the world.
    Absolutely right. This happened to me. A proof-of-concept with a few users was so good it became production without anyone telling us. The corporate end decided it would be rolled out as-is, and that was it. We went from a few users to hundreds in no time.
  • Cheong (unregistered) in reply to Androtheos
    Androtheos:
    I have a staging area, it's called my development machine :-)
    At one of my former companies, I had a dozen production applications running on my desktop... This is hardly surprising.
  • vladimir (unregistered) in reply to Smug Unix User
    Smug Unix User:
    This is why we can't have nice things. This is also the reason that staging servers get locked down as tightly as production. This is also the reason that getting an application through the process has more layers than an onion. When people don't follow the rules, more rules typically get added as a knee-jerk response to "how did this happen?" It appears the company is about midway through the life cycle.

    In stage one of the life cycle, it is the Wild West and anything goes anywhere. No sign-off, no problem. As the company matures, they realize that they need things like change management, source control and other industry standards.

    Standards begin small in the middle stage. Things can get done at this stage but the users who had grown to appreciate the speed at which things were done in the beginning stage lose faith in the system and begin to circumvent it. This triggers the final stage.

    In the final stage, nothing can be touched without three signatures. Duties are separated to prevent any one area from having control or authority. Red tape is the final product. At this stage progress grinds to a near standstill.

    Bonus stage: rise of the contractors. Since nothing can be done in the previous stage, contract work begins. Work is farmed out to a company in stage 1 or stage 2. The original host company stays alive until it acquires its contract companies.

    Yup, so true. The company I am working for is currently in stage 2. When we were in stage 1, we were developing on production, with very few backups. Dev server? Staging server? HAHAHAHA

    Then our server got hacked and completely wiped. By some fluke we had done a full DVD backup the day before... although of course the backups weren't tested, and the last 2 DVDs turned out to be corrupted. Luckily our mission-critical data was alright; we just lost some minor stuff that we recovered from.

    So over the next 2 years we implemented a proper dev server, staging server, backup system, security, and source control. We are still lacking unit testing (probably implementing it in the next 6 months) and a proper change-request/bug-tracking system. We still occasionally cheat, such as quickly promoting to production with little testing.

    I don't see the bonus stage happening to us.

  • vladimir (unregistered)

    I see stage 3 happening in about a year or so.

  • Friedrice The Great (unregistered) in reply to Don L
    Don L:
    C-Derb:
    ....that gives developers a heads-up as to *why* stuff is going to break, and that now is a good time to take a break and read TDWTF.com.

    No real developers read TDWTF.

    They apparently didn't work on Community Server, either.
  • (cs) in reply to Smug Unix User
    Smug Unix User:
    This is why we can't have nice things. This is also the reason that staging servers get locked down as tightly as production. This is also the reason that getting an application through the process has more layers than an onion. When people don't follow the rules, more rules typically get added as a knee-jerk response to "how did this happen?" It appears the company is about midway through the life cycle.

    In stage one of the life cycle, it is the Wild West and anything goes anywhere. No sign-off, no problem. As the company matures, they realize that they need things like change management, source control and other industry standards.

    Standards begin small in the middle stage. Things can get done at this stage but the users who had grown to appreciate the speed at which things were done in the beginning stage lose faith in the system and begin to circumvent it. This triggers the final stage.

    In the final stage, nothing can be touched without three signatures. Duties are separated to prevent any one area from having control or authority. Red tape is the final product. At this stage progress grinds to a near standstill.

    Bonus stage: rise of the contractors. Since nothing can be done in the previous stage, contract work begins. Work is farmed out to a company in stage 1 or stage 2. The original host company stays alive until it acquires its contract companies.

    "Standards begin small in the middle stage. Things can get done at this stage but the users who had grown to appreciate the speed at which things were done in the beginning stage lose faith in the system and begin to circumvent it. This triggers the final stage."

    No, this triggers the stage where you start getting rid of all those stupid prima donnas who think they're too important for process. They have a personality defect which makes them incompatible with working in a team. They are toxic.

  • (cs) in reply to Eric
    Eric:
    Unless Monika makes great cake
    Clearly she should have been named GLaDOS.
  • Nobody (unregistered) in reply to eViLegion
    eViLegion:
    SirCompo:
    It's the gimp on the IT Helpdesk who bypassed the usual process that should be held accountable for this.

    Eh? What's it gotta do with the IT Helpdesk guy?

    Someone rang him up and asked him if he gave them permission to do something. Since it probably wasn't the helpdesk's jurisdiction, why shouldn't the guy there just shrug and say "yeah... I don't have a problem with that, mate... knock yourself out"?

    If some random cunt phoned me up and asked if I was happy for him to do something that was absolutely nothing to do with me, I'd just say "yes" in a slightly confused tone, then hang up the phone. If it turned out that the guy then murdered a high ranking politician, I'm pretty sure I'd be in the clear.

    And that's how confusion happens and how you risk getting blamed for things. You shouldn't give permission for things over which you have no authority. You should say "I don't have any control over that."

  • Spencer (unregistered) in reply to ItWuzAlwaysAboutDahNumbaz
    ItWuzAlwaysAboutDahNumbaz:
    eViLegion:
    If it turned out that the guy then murdered a high ranking politician, I'm pretty sure I'd be in the clear.

    What if the high ranking politician was the vice president? Or worse still, a closely related family member...?

    A plot against the president's daughter?

  • anon (unregistered) in reply to underling

    That was my first thought. Still, your boss is what you make him/her - it's also a part of the process ;)

  • (cs) in reply to Steve The Cynic
    Steve The Cynic:
    Reading comprehension failure.

    Or lack of coffee. But yeah, thanks for the explanation (and English is not my native tongue).

    A few years back I worked at a place where there were different servers for different purposes: front-end (web based), middleware, and back-end (with business logic). The problem was that you couldn't always be sure you'd hit the right combination. Most of these servers served various purposes (dev, various testing/staging, production), but sometimes a specific test front-end hit the wrong back-end database. Testers could spend hours trying to find out what had happened to their test data.

    Fortunately the production servers never had that problem (as far as I know) :-)

  • Mike Dimmick (unregistered) in reply to MiniMax
    MiniMax:
    The WTF is that there is no such thing as development servers, testing servers or staging servers.

    They are all PRODUCTION servers - for different groups.

    Servers being used by developers are production servers - for the developers.

    Servers being used for testing are production servers - for developers (and maybe QA).

    Servers being used for staging are production servers - for QA.

    If not, then a lot of people are not being productive and should be fired.

    True: but do your staging servers really need the full (expensive) high-availability, backup and replication hardware and software licences? Can you get away with slightly overcommitting resources on your virtual machine hosts, with the understanding that you're not actually going to load-test all of your applications simultaneously?

    If your software does actually integrate with your platform's clustering/high-availability solution, and you need to accurately test that it fails over correctly, then yes, QA might need identical hardware. Otherwise, perhaps not.

    The answers to those questions determine whether your staging environment is completely identical to production, or whether you might save some money by getting near enough to it. You might find it difficult to get authorization to spend money on systems that will be idle a lot of the time. The first time you'll get the authorization is when something doesn't actually scale or fail over properly when put into full production.
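
    To put rough numbers on it (all hypothetical), the back-of-the-envelope check looks something like this in Python:

        # Made-up staging capacity check: RAM promised to staging VMs
        # versus RAM physically present on the hosts.
        HOST_RAM_GB = 256
        HOSTS = 1
        STAGING_VMS = {"web": 64, "app": 96, "db": 128, "uat-db": 96}  # GB each

        allocated = sum(STAGING_VMS.values())   # 384 GB promised to the VMs
        physical = HOST_RAM_GB * HOSTS          # 256 GB actually installed
        print("overcommit ratio: %.2fx" % (allocated / physical))  # 1.50x

    A 1.5x overcommit would sink production, but it is often tolerable for staging, precisely because those VMs are rarely all under load at the same time.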

  • katastrofa (unregistered) in reply to golddog
    golddog:
    Seems to me the real WTF is TTP's PM promising a go-live of April 1st, and not "x number of days after Development and UAT are finished."

    When there's a dependency, set expectations based on that dependency.

    Right, but the clients want a final date. They don't give a cat's whiskers about your dependencies. It's your job to sort them out.

  • (cs) in reply to Lerch98
    Lerch98:
    I named test rigs after ex-wives and girlfriends. People here at work asked me where I got all those names, and I told them. It was a big joke around the office, and now I am the official name-maker.

    captcha: genitus. Lerch the super genitus

    Another client I work with has Hindu gods for server names.

    Vishnu / Shiva / Brahma / Indra / Agni / Varun / Vayu - and then he committed the silly mistake of naming one server after the demon king Ravana. People had best study this stuff before naming servers.

  • Guran (unregistered) in reply to Some Damn Yank
    Some Damn Yank:
    Guran:
    TRWTF here is production clients accessing a staging server. Few things can cause more badness than staging machines communicating with the outside world.
    True enough, but I've got a real WTF for you: staging machines that run Windows when the production machines run Linux. No shit. And everyone wondered why the web site didn't run right in production after it tested so well in development.

    Happened to me too...

    But the real fun begins when someone does a test run with production data on a not-properly-sandboxed staging server. Especially if your application involves something like mail notification to customers.

  • (cs) in reply to Guran
    Guran:
    Some Damn Yank:
    Guran:
    TRWTF here is production clients accessing a staging server. Few things can cause more badness than staging machines communicating with the outside world.
    True enough, but I've got a real WTF for you: staging machines that run Windows when the production machines run Linux. No shit. And everyone wondered why the web site didn't run right in production after it tested so well in development.

    Happened to me too...

    But the real fun begins when someone does a test run with production data on a not-properly-sandboxed staging server. Especially if your application involves something like mail notification to customers.

    Isn't Java running everywhere?

  • (cs) in reply to Nobody
    Nobody:
    eViLegion:
    If some random cunt phoned me up and asked if I was happy for him to do something that was absolutely nothing to do with me, I'd just say "yes" in a slightly confused tone, then hang up the phone. If it turned out that the guy then murdered a high ranking politician, I'm pretty sure I'd be in the clear.

    And that's how confusion happens and how you risk getting blamed for things. You shouldn't give permission for things over which you have no authority. You should say "I don't have any control over that."

    1. Simply saying "yes" and hanging up deals with the entire thing (as far as I'm concerned) in precisely 1 second. Saying "I don't have any control over that" invites follow-up questions that I don't want entering my brain, let alone having to answer them.

    2. He shouldn't be ringing me up in the first place, and should know who he ought to be ringing up. Therefore the entire call is a massive waste of my time. Fuck that guy.

    3. Not my confusion; I know exactly what's going on. Arguably, there is no increase in confusion - the moron starts confused and ends exactly as confused, having maintained a steady level of confusion throughout. Although, about midway through, he might be lulled into a false sense of non-confusion.

    4. No risk of blame. Anyone saying "well, HE said I could do it" will be met with "I work on helpdesk... what the fuck are you talking about?".

    5. Where is the fun in cooperating with a moron? Far better to "send them to fetch some stripey paint".

  • Your Name * (unregistered)

    Help Desk installing things in production?

  • Drone (unregistered) in reply to Z

    Yeah, this smacks of someone trying to get something pushed in Q1, so he can say it got done in Q1 during reviews.

  • Bananas (unregistered) in reply to Sobachatina
    Sobachatina:
    I'll take devil's advocate here.

    This business critical app is probably generating actual revenue. R&D was down to the wire and had to get creative. They bypassed the bureaucracy and procedures and got their work done in record time.

    The loose ends were tied up by IT after the fact.

    Sure it's not ideal. Ideally you'd know about every hiccup months in advance with time to plan for them and get all the necessary approvals. In the real world sometimes you have to get things done creatively.

    The real WTF is the confrontational attitude of the PM. He got it done but he must have known it was bad. I hope his undiplomatic tone is simply embellishment to make a better story.

    Fail.

    Going live on a staging server is, by definition, a failure. Add "business critical" and "revenue stream" to the mix and it's a triple fail.

  • (cs) in reply to Your Name *
    Your Name *:
    Help Desk installing things in production?

    No. Staging. Which was used as though it was production. Which it wasn't.

  • mainframe web dev (unregistered) in reply to Drone
    Drone:
    Yeah, this smacks of someone trying to get something pushed in Q1, so he can say it got done in Q1 during reviews.

    Same thought. The PM planted the SUCCESS flag.

  • modifiable lvalue (unregistered) in reply to eViLegion

    In that case, why not have even more fun? "Please hold and I'll find out for you...". In the meantime, continue to respond to calls on other lines.

    Fifteen or twenty minutes later, pick up the phone and say "According to policy, you might want to talk to the person in charge of the production server". In this case, the idiot would probably respond with "... but it needs to be promoted NOW!", to which I'd respond "Please hold and I'll get back to you...".

    Fifteen or twenty minutes later, pick up the phone and say "According to policy, you might want to talk to the person in charge of the production server. If you explain the sense of urgency you're feeling, that system administrator might be inclined to promote your application quickly".

    Any further response other than "OK. Thank you. Bye." would elicit the same treatment: fifteen or twenty minutes of wasting their own time "on hold", followed by reiteration and extension of what was previously said. Isn't that how an IT helpdesk works?

    CAPTCHA: quis as in "Quis es"; "Who are you?", an appropriate greeting to answer the IT helpdesk phone with.

  • Meta Commenter (unregistered) in reply to SCSimmons
    SCSimmons:
    We never point our (all internal) customers to pages hosted on our staging webserver. Never, ever. All good, right?

    Except ... we have some pages that aren't exposed to customers, that run scheduled processes (updating files on the network, sending e-mails, etc.). We never put those on the production server, since they could conceivably be exposed to customers there. We design them on development and test them on staging, then schedule the run to look for the copy on the staging server.

    So--yesterday, our development and staging environments got wiped out. (Somehow--my manager said only that 'the responsible party confessed'.) We were scrambling for quite a while before the infrastructure team got those restored from backup.

    And mere hours later, I'm told that I'm to be the new supervisor of that intranet development team. Any ideas for some initial decrees? :)

    Decree #0: Any action you take must either be undoable or have sufficient backups that the prior state can be restored within 1 hour of invoking the failback clause.

    Decree #1: Any time a failback clause has to be invoked, the instigator of the failure shall either be slapped with a wet, oily fish or be required to buy appropriate recompense for those who were affected (food, drinks, a movie, etc.).

  • jMerliN (unregistered)

    This happened to me at least once a month when I worked in IT. You almost need a tracking system just to figure out which servers you can't cycle, because someone has some "critical" system running on them and never bothered asking us to set up a production environment for it.

    These systems also tended to be the ones that refused to cooperate on maintenance nights. Nothing screams loads of fun like patching everything, only to find that some system refuses to start working again afterwards (even though it had no trouble at all in the trial run).

    A 1-hour window of work turns into 5, and those late-night support people are so caffeinated, and often intoxicated, that they're not useful - but they are hilarious.

  • Chuck (unregistered)

    Aw man, the comments on this thread are golden.

  • i can has string concat (unregistered) in reply to Some Damn Yank
    Some Damn Yank:
    Guran:
    TRWTF here is production clients accessing a staging server. Few things can cause more badness than staging machines communicating with the outside world.
    True enough, but I've got a real WTF for you: staging machines that run Windows when the production machines run Linux. No shit. And everyone wondered why the web site didn't run right in production after it tested so well in development.
    You wouldn't have this problem if you followed what we do: deploy both staging and production instances on the same machine.

    Feel free to add testing / UAT instances to the machine as necessary. The environment is guaranteed to be consistent.

  • (cs) in reply to Bene
    Bene:
    I would like to line up everyone who names their servers after cars, planets, Disney characters, ex-girlfriends, and emoticons. For the love of God, think of the sysadmins. This naming convention will not scale, and it is a nightmare to maintain when your company gets big.

    Every business is different... Applying a naming scheme used for workstations in a large call centre to a small server farm in a single-site business might be just as retarded as trying to use the planets of the solar system for a multinational's data centre.

  • don (unregistered) in reply to tin

    "What happened to our production server, Pluto? "

  • anonymous (unregistered) in reply to don
    don:
    "What happened to our production server, Pluto? "

    Sorry, he was demoted for negligibility.
