• Da' Man (unregistered) in reply to I_M_Noman
    I_M_Noman:
    I think I used to work for that company too...
    *sigh* I am afraid I'm going to start working for this company (or its older brother) on the 1st of March - or May - or April... depends on when the HR dept. finds the time to put the right stamp on the right form.

    Oh heck. It might even take until June or so...

    :-/

  • (cs) in reply to Anonymous Coward
    Anonymous Coward:
    Probably a stupid question, but what's the difference between a testing and staging server?

    You have code, code which works and is in production. Now you get a bug report, have to add a feature, whatever. So you start a branch of the production code on the dev machine, and get busy writing.

    When you want to make sure that code does what it's intended to do, doesn't affect any other parts of the app, etc, you move it from the dev machine to the testing machine. There you poke in whatever ways are necessary. QA and devs also get a chance to poke at stuff here.

    Once the code is verifiably working (and doing no other harm), it is frozen (no more enhancements, bugs fixed, etc) and it is moved from testing to staging. This is where you invite a few end users (the rest of your group, including some no-tech folks most probably) to test drive the changes. Stuff in staging is basically "in beta". URLs are fairly stable, the apps don't go up and down because you're restarting things all the time like on dev or testing, etc. Depends on where you work, but another security review might also happen here.

    Once you know everything is 100% ready to go, nothing is going to break, people love the new features, you move from staging to production. You don't forget to let your admins know that the app is going to go down for a minute, lest they get paged. And because you care, you have looked at traffic patterns and are doing the production integration at your absolute off-peak, to minimize impact on end users. Thankfully, you were mentored by the King Of CYA, and have a rollback plan should it be necessary to "downgrade" to the last rev of your production app. The code is integrated, QA does their smoke tests, and if it all works, you ask the admins to keep an eye on it and then you go home and drink it off.
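
    In case a picture helps, here's a back-of-the-napkin sketch of that promote-with-rollback flow in Python. Every name in it (the /srv/app root, the release/current/previous layout, copy-as-deploy) is made up purely for illustration -- a real shop would use its own deploy tooling, not this.

    #!/usr/bin/env python3
    """Toy sketch of the dev -> test -> staging -> prod flow described above.
    All paths and conventions here are hypothetical."""
    import shutil
    import sys
    from pathlib import Path

    PIPELINE = ["dev", "test", "staging", "prod"]   # code only moves one hop to the right
    ROOT = Path("/srv/app")                         # hypothetical install root

    def promote(release: str, target: str) -> None:
        """Copy a frozen release from the previous environment into `target`,
        keeping the old live rev around as a rollback target."""
        idx = PIPELINE.index(target)
        if idx == 0:
            raise ValueError("dev is where releases are built, not promoted to")
        source = ROOT / PIPELINE[idx - 1] / release
        if not source.is_dir():
            sys.exit(f"{release} has not reached {PIPELINE[idx - 1]} yet")

        dest = ROOT / target / release
        current = ROOT / target / "current"      # symlink to the live release
        previous = ROOT / target / "previous"    # symlink kept for rollback
        shutil.copytree(source, dest)

        if current.is_symlink():
            if previous.is_symlink():
                previous.unlink()
            current.rename(previous)             # last known-good rev, one rename away
        current.symlink_to(dest)
        print(f"promoted {release} to {target}; rollback points at {previous}")

    def rollback(target: str) -> None:
        """Swap the previous rev back in if the smoke tests blow up."""
        current = ROOT / target / "current"
        previous = ROOT / target / "previous"
        if not previous.is_symlink():
            sys.exit(f"no rollback copy recorded for {target}")
        if current.is_symlink():
            current.unlink()
        previous.rename(current)

    if __name__ == "__main__":
        # e.g.  promote.py 2.4.1 staging
        promote(release=sys.argv[1], target=sys.argv[2])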

    That's a perfect world. What actually happens is that some bozo has shit running out of his home directory and because it's "in production" you need heavy duty earth movers to get it on a real server since mgmt doesn't realize that it's flaky. All they know is that "it's working great now" and they can't risk any downtime. No amount of technically correct reasoning can convince the VP of sales that it's just fine to move his customer-facing app. He'll veto your arguments by saying to the CEO, "Bob, I trust the IT guys, but I just don't see how we can risk losing a customer because the app is down..."

    The guy who built it quits, and stuff breaks left and right, because an undocumented feature was that the guy copied files around every morning to keep things working. Or his /home/luser directory was archived and deleted when HR terminated his employment. So now you have a bunch of guys copying shit off tape, only to find that it's not all there, permissions are whacked, there were other files in /usr/local that the app needed, whatever.

    Or maybe the guy does like in the story and runs crap off his workstation. He doesn't bother using source control, and instead just uses a very intuitive sequential numbering system. Or the final executable is happily named "app-working.exe" so that everyone knows it's the good one that should be in production. He could also just append dates to the app name. That's really helpful, since the last rev is always the production copy.

    The best part about the above scenario is that the guy's desktop box will wind up living racked up sideways in a datacenter, in a rack nicknamed "the graveyard" by the NOC staff. Nobody knows how to restart it should power get shut off, so the admins taped the top of a water bottle over the power button and put a note on it. It'll be known as "The Dell Desktop You Don't Touch" and folks will be more than happy to pretend it doesn't exist (and hope that its lesser-quality desktop power supply stays running for years and years to come).

    Once it does go down (and it will, believe me, it will), there will be no less than 8 admins -- some of whom are very senior -- who will spend 6 hours to bring it back up correctly and test it very roughly. Moving it to a real server environment and ginning up a little documentation would have cost roughly 1/35th of that total in man-hours and downtime. But because they couldn't risk the downtime, you recall, they never did that.

    The senior IT guy there shoots off angry emails calling the VP of Sales an ignorant twit, and he wants that desktop shit outta his datacenter but pronto, thank you very much.

    So a committee is formed and all sorts of buy-in are gathered. Months later, everyone's forgotten about the little box (the NOC guys more than anyone pretend there's nothing where it's racked) and so nothing gets done. Then it goes down again...

  • (cs) in reply to Vlad Patryshev
    Vlad Patryshev:
    Yeah... and imagine it is your 20% project at Google. And your expected QPS is about 90. And the script you want to deploy is 7.5k zipped. And for 3 years you hear that there is no server capacity to deploy and serve 7.5k 90 times a second. And you have to follow the rules that change while you follow them. And it is all your fault.

    See if they still have the SDE Open House meetings every Thursday. Bring your docs, and show them a demo. Might help to bring your PM as well. If that fails, get tickets to a Steelers game and bribe the main SDE deployment guy at the Open House (he'll be wearing a hoodie and soccer shoes).

    Failing that, search the wiki for how to deploy a service. Code up the packaging, monitoring, clustering stuff (you know the code words) yourself, make it easy for them to deploy.

    But you're gonna need a resource allocation and security review regardless...

  • (cs) in reply to BOOM!
    BOOM!:
    i think the word u're looking for is boon.

    I think the word you're looking for is "you're".

  • bleh (unregistered)

    This sounds like Raytheon.

  • Max (unregistered) in reply to Sean
    Sean:
    jo42:
    What I want to know is how did Bruce W. manage to get the project approved in the first place?

    My general policy is "it's easier to ask for forgiveness than permission". Bruce W. probably started working on the application, deploying to the machines he had, before he even put in a project proposal. That way, when you go to the project approval meeting and your boss's boss's boss's...boss asks what the timeframe is, you can say "it's already done" (since you had the three weeks it took to get an approval meeting scheduled in which to do the work).

    Oh, sure, they'll get a little red in the face, but they won't sh**-can a project that's already done.

    I've been in companies where they would fire you on the spot for breaking a process intended to control risk within a large company. Sure, it is easier to ask forgiveness... but the consequences are much harsher when the answer is "No."

  • BOOM! (unregistered) in reply to cconroy
    cconroy:
    BOOM!:
    i think the word u're looking for is boon.

    I think the word you're looking for is "you're".

    hahah, cconroy, ura FAIL!

  • (cs) in reply to BOOM!
    BOOM!:
    x:
    that's right:
    basically any multi-hundred-million-dollar corporation will have this red tape. There is a reason this bureaucracy exists; a lot of it is CYA, but if something does go wrong they need to know WHERE it went wrong, WHY it went wrong, and (depending on what it was) WHO went wrong... so that something like that does not happen again.

    A "simple outage" (hardware conflict, driver conflict, a process not starting etc) can cost a company hundreds of thousands if not millions of dollars. Sure the red tape is expensive as well as getting approval is slow, but mistakes can be even more expensive.

    depending on the company, maybe, but such companies don't allow "outages" if they have a halfway intelligent design team. a la google.

    for the rest of the world, half the IT infrastructure or more was probably created specifically for the red tape to be properly implemented. in such a case one might argue an outage is a boom to productivity as all of a sudden, the tape dispenser is empty and people are....shudder....forced to think....

    i think the word u're looking for is boon.

    With or without the Administratium doggle?

  • (cs)

    I requested a stapler... and I still haven't received my paycheck yet...

  • Anonymous Coward (unregistered) in reply to Anonymous Coward
    Probably a stupid question, but what's the difference between a testing and staging server?

    A test environment is a setup where every developer can change all parts of the system. A test environment can break and nobody cares. Test environments are used where complex interactions cannot be tested on your development machine (e.g. cases where many computers must interact).

    A staging environment is a mirror copy of your production environment. Staging is used to ensure that all integrations work as they should from a production point of view. A staging environment generally cannot be changed by the developer. Staging environments are also used for UAT (User Acceptance Testing) before committing to production.

    A production environment is off-limits to developers (or it should be).
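
    For what it's worth, here's the same split as a toy config table (purely illustrative -- the flags and role names below are assumptions, not any particular shop's policy):

    # Purely illustrative: the three environments above as a config table.
    ENVIRONMENTS = {
        "test": {
            "purpose": "developer testing of complex, multi-machine interactions",
            "developers_can_change": True,
            "breakage_matters": False,       # it can break and nobody cares
        },
        "staging": {
            "purpose": "production-like integration checks and UAT",
            "developers_can_change": False,
            "breakage_matters": True,
        },
        "production": {
            "purpose": "live traffic",
            "developers_can_change": False,  # off-limits, or it should be
            "breakage_matters": True,
        },
    }

    def may_touch(role: str, env: str) -> bool:
        """Developers get free rein only where breakage doesn't matter."""
        if role == "developer":
            return ENVIRONMENTS[env]["developers_can_change"]
        return role in ("release_manager", "admin")

    if __name__ == "__main__":
        assert may_touch("developer", "test")
        assert not may_touch("developer", "production")
        print("policy table behaves as described above")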

  • (cs) in reply to Anonymous Coward
    Anonymous Coward:
    Probably a stupid question, but what's the difference between a testing and staging server?

    A test environment is a setup where every developer can change all parts of the system. A test environment can break and nobody cares. Test environments are used where complex interactions cannot be tested on your development machine (e.g. cases where many computers must interact).

    A staging environment is a mirror copy of your production environment. Staging is used to ensure that all integrations work as they should from a production point of view. A staging environment generally cannot be changed by the developer. Staging environments are also used for UAT (User Acceptance Testing) before committing to production.

    A production environment is off-limits to developers (or it should be).

    No no no no no.

    A "testing" is where somebody posts a slightly naive, but worthy, question, and a bunch of self-appointed experts mess around making random comments without actually checking whether anybody else has correctly answered the question. Good examples of a "testing" would include (a) most talking-head programs on TV; (b) this site; (c) anything that gives rational human beings a headache, whilst lacking any useful side-effects, such as an increase in reading comprehension.

    A "staging server" is somebody who does the make-up in a dressing room before the show.

    Or it might be an environment in which questions are asked in a realistic and customer-facing way, without letting ignorant yahoos confuse the issue.

    It might be. It might be. There's sod-all evidence of it around here, though.

  • Adrian Pavone (unregistered) in reply to akatherder
    akatherder:
    Anonymous Coward:
    Probably a stupid question, but what's the difference between a testing and staging server?

    "Testing" is where the developers test our changes basically to make sure all of our code made it from one environment to the next (i.e. did all of the code files, images, sp's, etc get promoted). "Staging" is where the actual testers make sure the code works, meets all of the business rules, and does what it's supposed it to.

    That's just what we do here.

    From what we did, and how I understand the terms are meant to be used, "Testing" environments are for OAT (Operational Acceptance Testing), which is effectively testing that the system does its job properly, whereas "Staging" environments are for UAT (User Acceptance Testing), which is effectively testing that the system does the proper job.
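
    A toy illustration of that OAT/UAT split, with a deliberately trivial stand-in for the system under test (the VAT rule, the timing threshold -- everything here is hypothetical):

    import time

    def vat_for(amount: float, region: str) -> float:
        """Pretend business rule: 20% VAT in the EU, none elsewhere."""
        return round(amount * 0.20, 2) if region == "EU" else 0.0

    def operational_acceptance_test() -> None:
        """OAT ("does the system do its job properly?"): it runs, responds in
        time, and doesn't fall over on odd-but-legal input."""
        start = time.perf_counter()
        vat_for(100.0, "EU")
        assert time.perf_counter() - start < 0.1    # responds within tolerance
        assert vat_for(0.0, "US") == 0.0            # edge input handled without crashing

    def user_acceptance_test() -> None:
        """UAT ("does the system do the proper job?"): the rule itself is the
        one the users actually wanted."""
        assert vat_for(100.0, "EU") == 20.0
        assert vat_for(100.0, "US") == 0.0

    if __name__ == "__main__":
        operational_acceptance_test()
        user_acceptance_test()
        print("OAT and UAT checks passed")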

  • Scooby Doo (unregistered) in reply to I_M_Noman

    I used to work at State Farm Insurance, too.

  • (cs) in reply to ThomsonsPier
    ThomsonsPier:
    powerlord:
    "big if a deal!"

    Is that supposed to be "big of a deal" or "big effin' deal" ? ;)

    The normal form here would be 'not that big a deal'. The 'of' (or 'if') has always seemed superfluous to me.
    The 'that' isn't a big deal either.

  • Russ (unregistered) in reply to wee
    wee:
    Anonymous Coward:
    Probably a stupid question, but what's the difference between a testing and staging server?

    You have code, code which works and is in production. Now you get a bug report, have to add a feature, whatever. So you start a branch of the production code on the dev machine, and get busy writing.

    When you want to make sure that code does what it's intended to do, doesn't affect any other parts of the app, etc, you move it from the dev machine to the testing machine. There you poke in whatever ways are necessary. QA and devs also get a chance to poke at stuff here.

    Once the code is verifiably working (and doing no other harm), it is frozen (no more enhancements, bugs fixed, etc) and it is moved from testing to staging. This is where you invite a few end users (the rest of your group, including some no-tech folks most probably) to test drive the changes. Stuff in staging is basically "in beta". URLs are fairly stable, the apps don't go up and down because you're restarting things all the time like on dev or testing, etc. Depends on where you work, but another security review might also happen here.

    Once you know everything is 100% ready to go, nothing is going to break, people love the new features, you move from staging to production. You don't forget to let your admins know that the app is going to go down for a minute, lest they get paged. And because you care, you have looked at traffic patterns and are doing the production integration at your absolute off-peak, to minimize impact on end users. Thankfully, you were mentored by the King Of CYA, and have a rollback plan should it be necessary to "downgrade" to the last rev of your production app. The code is integrated, QA does their smoke tests, and if it all works, you ask the admins to keep an eye on it and then you go home and drink it off.

    That's a perfect world. What actually happens is that some bozo has shit running out of his home directory and because it's "in production" you need heavy duty earth movers to get it on a real server since mgmt doesn't realize that it's flaky. All they know is that "it's working great now" and they can't risk any downtime. No amount of technically correct reasoning can convince the VP of sales that it's just fine to move his customer-facing app. He'll veto your arguments by saying to the CEO, "Bob, I trust the IT guys, but I just don't see how we can risk losing a customer because the app is down..."

    The guy who built it quits, and stuff breaks left and right, because an undocumented feature was that the guy copied files around every morning to keep things working. Or his /home/luser directory was archived and deleted when HR terminated his employment. So now you have a bunch of guys copying shit off tape, only to find that it's not all there, permissions are whacked, there were other files in /usr/local that the app needed, whatever.

    Or maybe the guy does like in the story and runs crap off his workstation. He doesn't bother using source control, and instead just uses a very intuitive sequential numbering system. Or the final executable is happily named "app-working.exe" so that everyone knows it's the good one that should be in production. He could also just append dates to the app name. That's really helpful, since the last rev is always the production copy.

    The best part about the above scenario is that the guy's desktop box will wind up living racked up sideways in a datacenter, in a rack nicknamed "the graveyard" by the NOC staff. Nobody knows how to restart it should power get shut off, so the admins taped the top of a water bottle over the power button and put a note on it. It'll be known as "The Dell Desktop You Don't Touch" and folks will be more than happy to pretend it doesn't exist (and hope that its lesser-quality desktop power supply stays running for years and years to come).

    Once it does go down (and it will, believe me, it will), there will be no less than 8 admins -- some of whom are very senior -- who will spend 6 hours to bring it back up correctly and test it very roughly. Moving it to a real server environment and ginning up a little documentation would have cost roughly 1/35th of that total in man-hours and downtime. But because they couldn't risk the downtime, you recall, they never did that.

    The senior IT guy there shoots off angry emails calling the VP of Sales an ignorant twit, and he wants that desktop shit outta his datacenter but pronto, thank you very much.

    So a committee is formed and all sorts of buy-in are gathered. Months later, everyone's forgotten about the little box (the NOC guys more than anyone pretend there's nothing where it's racked) and so nothing gets done. Then it goes down again...

    Then someone who isn't completely retarded joins the IT folks, pops a VMware Converter CD into the workstation, moves it over to their VMX cluster and never has to worry about it again.

  • (cs)

    So... he works at a large corporation AND there's lots of red tape involved? Craaaaazyyy story mannnnn

    WTFs:

    1. Asking for 4 servers 9 weeks before your multi-million dollar project goes into production. Yeah, 2 months probably is enough time to get it done, but given that you know navigating bureaucracies in large organizations would be tricky, why leave it to chance? Start ordering those servers 12 months in advance! Or more! Millions of dollars people!

    2. In week 6, there are 3 weeks left until production. In week 8, the system goes live. Did they move their deploy date forward one week in the face of not having enough resources?

    3. The stickies on the ghetto rig should say "NO TOCAR" instead of "do not touch". Unless this is in some bizarro world where the cleaning people speak English.

    4. "Cleaning staff misfeance"... apparently TDWTF lives in a bizarro world where the "writers" don't speak english. Are you implying that the cleaning staff might not follow the legal code when vacuuming?

    5. '"WHY DIDN'T YOU CALL ME SIX %($$!*& WEEKS AGO???" the IT head blasts' Indeed. WTF was he waiting for?!

  • common sense (unregistered)

    Hey Bruce ...... take a hint and RESIGN you clown

  • Nathan (unregistered)

    "The real WTF" is Alex's ridiculous and poorly written dramatizations. These stories are entertaining and informative enough in real life. Why exaggerate and change them so much? It's less interesting when half the story is made up.

  • Michael (unregistered) in reply to Sean
    Sean:
    Oh, sure, they'll get a little red in the face, but they won't sh**-can a project that's already done.

    The naivety is so charming. Can we take him home, Mom? I promise all my TPS reports will be done on time!

  • Franz Kafka (unregistered) in reply to savar
    savar:
    So... he works at a large corporation AND there's lots of red tape involved? Craaaaazyyy story mannnnn

    WTFs:

    1. Asking for 4 servers 9 weeks before your multi-million dollar project goes into production. Yeah, 2 months probably is enough time to get it done, but given that you know navigating bureaucracies in large organizations would be tricky, why leave it to chance? Start ordering those servers 12 months in advance! Or more! Millions of dollars people!
    What the hell for? Even in my company, it only takes a month to get hardware. If it takes that long, then let it fail - you shouldn't have a 4-server allocation dominate a 2-month project. Seriously, you have a broken company.
    3) The stickies on the ghetto rig should say "NO TOCAR" instead of do not touch. Unless this is in some bizarro world where the cleaning people speak english.

    The cleaning staff where I work now is ex-Soviet bloc. Other places, it's Mexicans. If you can't lock them away from the servers, choose your sign carefully, or install a funny-shaped locking plug for the PC.

    5) '"WHY DIDN'T YOU CALL ME SIX %($$!*& WEEKS AGO???" the IT head blasts' Indeed. WTF was he waiting for?!

    Funny, I thought he did. Anyway, are you going to jump on a director because their org is being a shit? Depending on the culture, they might just decide you're being a pain.

  • (cs) in reply to Sean
    Sean:
    jo42:
    What I want to know is how did Bruce W. manage to get the project approved in the first place?

    My general policy is "it's easier to ask for forgiveness than permission".

    But then it's easier to ask permission than to find a new job. Thus the world is a paradox. Let us all eat cake and rejoice!

  • (cs) in reply to rfsmit
    rfsmit:
    ThomsonsPier:
    powerlord:
    "big if a deal!"

    Is that supposed to be "big of a deal" or "big effin' deal" ? ;)

    The normal form here would be 'not that big a deal'. The 'of' (or 'if') has always seemed superfluous to me.
    The 'that' isn't a big deal either.
    'not big a deal'?

    Sorry, by 'here' I meant in England, rather than in this example. It's largely irrelevant, but gives me an excuse to post again and waste some more time.

  • Bic0 (unregistered)

    Bruce, do we work for the same company? Today 2 new 500GB HDDs for our desktops arrived - after 3 months.

  • Jim Lard (unregistered)

    Wow this sounds just like the project I've been working on, except we needed a dozen standard Windows 2003 servers with NO webservers or extra software, and it took IT nine whole months to install them.

  • (cs)

    Oh, the pain... and the company in question has a 3-letter name - a vowel and two consonants, yes?

    One part this story missed that I've experienced is that about halfway through the process, the project team members are rotated off to other projects and new people are assigned. The new people don't understand what you're trying to get set up, so they do nothing until you poke them with a sharp stick. Then new meetings are called, and all the questions you thought you'd dealt with resurface, to be laboriously strangled again.

    Breathe in... breathe out. Where is my glass? Oh no, it's empty!

  • Fred (unregistered)

    This system has a name: CMMI Level 5. And it has a sister: PRINCE2.

  • IHasYerCheezburger (unregistered) in reply to jimi
    jimi:
    The real WTF is all the changes between first and third person.
    That is because the story was originally written in the first person by "Bruce", and then Alex rewrites it in the third person. But not always bug-free.
  • JohnB (unregistered)

    The real WTF is the absence of any mention of SLAs. I, too, work in a mega corp with various levels of approvals, etc. And any time I want something done by another department, I just ask for the SLA. The issue isn't that it took 20 weeks to get something done, the issue is that timeframes were envisioned that had no foundation.

  • JohnB (unregistered) in reply to RBoy
    RBoy:
    What does your comment require? -To be first . . . Comment form approved. ...

    "First"

    This is, by far, the very very best comment I have read here. There will be a smile on my dial for the rest of this week -- possibly longer.

  • (cs) in reply to Stormrider
    Stormrider:
    Surely the real WTF is why you need 4 WEB servers without an internet connection?
    Intranet?
  • foo (unregistered) in reply to Sean
    Sean:
    My general policy is "it's easier to ask for forgiveness than permission".

    I borrowed your wife. Forgiveness, please?

    That phrase really annoys me, as it's bandied about loads, but doesn't often work in practice. You see, that principle only applies when both parties agree that what you were doing was reasonable and acceptable, and you would have been given permission anyway. The solution isn't to do things without asking, it's to streamline the approval process.

  • Dan (unregistered)

    TRWTF is that Bruce got yelled at, period. He did his job on time and on budget, unless some other information was left out. He'd been bugging people, including the IT head, for weeks to get 4 servers. Next time, he should just park himself in the IT head's office and claim he's not leaving, or showering, until he gets what he wants.

  • (cs) in reply to savar
    savar:
    So... he works at a large corporation AND there's lots of red tape involved? Craaaaazyyy story mannnnn

    WTFs:

    1. Asking for 4 servers 9 weeks before your multi-million dollar project goes into production. Yeah, 2 months probably is enough time to get it done, but given that you know navigating bureaucracies in large organizations would be tricky, why leave it to chance? Start ordering those servers 12 months in advance! Or more! Millions of dollars people!
    <snip>
    1. "Cleaning staff misfeance"... apparently TDWTF lives in a bizarro world where the "writers" don't speak english. Are you implying that the cleaning staff might not follow the legal code when vacuuming?

    Just twelve months, savar? Why not think ahead? He should be asking right now for all the servers he could possibly need for the next 10 years. God only knows how long it would take in 2019 to request a mere server. </sarcasm>

    We are sooo quick to jump to conclusions today, aren't we? Let's suppose Bruce asks for the servers a year before the project starts. Of course there's the obvious outdating thing, which won't be a big issue this time because his project doesn't demand top-notch hardware. But I wonder if he could find a gypsy to foretell that there's a project coming up and that he needs to request 4 servers at once. If you know any, please share with us.

    Of course, maybe this project could have been foreseen a year in advance and Bruce just didn't think ahead. But maybe not, and it was a surprise for Bruce and everyone at his level. And maaaaybe, just maybe, this project could have been assigned to any of a number of PMs and it just happened that the PM of choice was Bruce. Now, tell me how your boss would react if he found out later that you asked for resources for a project that got assigned to someone else.

    As for the cleaning staff "misfeance", someone already explained its correct meaning up above. I won't bother to look for it, but you really should.

  • (cs)

    Bruce OBVIOUSLY is not much of a fan of this site, or has adblock on. Hasn't he seen the softlayer ads? He can get double disk space and bandwidth on his new server from them...

  • Been there (unregistered) in reply to Jon

    That's called studying ;)

  • Lego (unregistered) in reply to wee
    wee:
    Anonymous Coward:
    Probably a stupid question, but what's the difference between a testing and staging server?

    You have code, code which works and is in production. Now you get a bug report, have to add a feature, whatever. So you start a branch of the production code on the dev machine, and get busy writing... <snip/> ...and so nothing gets done. Then it goes down again...

    Anyone work in a shop of more than a couple of dozen guys where this story isn't familiar? This could be any company in the Fortune 1000.

    -Lego

  • Zaphod (unregistered) in reply to Stormrider
    Stormrider:
    Surely the real WTF is why you need 4 WEB servers without an internet connection?

    4 basic environments: dev, test, qa, prod. They will go by somewhat different names at different companies.

    dev: your sandbox. test: sandbox for you, all the other devs, the admins. qa: user & implementation testing. prod: live stuff.

    Typically only the biggest companies will have 4 discrete environments. Smaller companies often get by with 2.

    z

  • (cs) in reply to Russ
    Russ:
    wee:
    Anonymous Coward:
    Probably a stupid question, but what's the difference between a testing and staging server?

    You have code, code which works and is in production. Now you get a bug report, have to add a feature, whatever. So you start a branch of the production code on the dev machine, and get busy writing... <snip/> and so nothing gets done. Then it goes down again...

    Then someone who isn't completely retarded joins the IT folks, pops a VMware Converter CD into the workstation, moves it over to their VMX cluster and never has to worry about it again.

    You really had to quote all that, just to make a completely unrelated comment of dubious troll-worthiness?

    Dear Lord. We need to start charging for real estate round here. Maybe even with derivatives and stuff.

  • (cs) in reply to Lego
    Lego:
    Anyone work in a shop of more than a couple of dozen guys where this story isn't familiar? This could be any company in the Fortune 1000.

    -Lego

    Ditto.

    I'm feeling a KenW moment coming on here.

    Should you wish to rejoin the human race, there are plenty of opportunities out there. Charities, post-doctoral theses, that sort of thing.

    Alternatively, you could just stay under that thar bridge.

    Or go work for a company with twenty-four guys, a girl, and a pizza. Mmmmm ... donuts.

    Damn, just a pizza with anchovies on it.

  • Bruce W (unregistered) in reply to Dan
    Dan:
    TRWTF is that Bruce got yelled at, period. He did his job on time and on budget, unless some other information was left out. He'd been bugging people, including the IT head, for weeks to get 4 servers. Next time, he should just park himself in the IT head's office and claim he's not leaving, or showering, until he gets what he wants.

    Of the entire story, that was the only part that was embellished. The Senior VP who rolled the heads was very supportive of me. He actually called me a couple of days later to make sure I was getting what I needed.

  • Dan (unregistered) in reply to Bruce W
    Bruce W:
    Dan:
    TRWTF is that Bruce got yelled at, period. He did his job on time and on budget, unless some other information was left out. He'd been bugging people, including the IT head, for weeks to get 4 servers. Next time, he should just park himself in the IT head's office and claim he's not leaving, or showering, until he gets what he wants.

    Of the entire story, that was the only part that was embellished. The Senior VP who rolled the heads was very supportive of me. He actually called me a couple of days later to make sure I was getting what I needed.

    Oh well. Glad to hear it. :)

  • Lynx (unregistered) in reply to JustACodeMonkey
    JustACodeMonkey:
    We had been running one server as both our QA and production environment. But once we had apps that people were actually using, we tried to get a QA server. It took nearly a year to acquire a VMware server running IIS, with a total cost of $3US per year. The cost in time spent getting the server: over $175K....
    Oh boy. This story reminds me of mine from a couple of years back...

    So I inherited a popular issue tracking app on a recycled server. It's decently set up and configured, but the original "boss" wanted user buy-in so didn't set many constraints, resulting in organic (and chaotic) growth. When the org decided to "officialize" this, they decided on an "office-hours, best effort" SLA -- i.e., after work I can tell users to f**k off. But the downside is that this went to a bunch of infra guys who have absolutely no experience with the setup, and aren't willing to learn due to political issues.

    So fast forward a year: a lot of crap happened, but we got to grips with the production environment. It finally got to the point where I didn't need to worry excessively about whether I could revive the app after it died, so we could look towards making real progress in utilizing the app. At this point, I started putting my foot down about having a second environment to play around with.

    (Yes, you got that right, I had been playing with direct and immediate changes to a production environment for a whole year...)

    So I duly started the process of talking to my infra folks, who were nice enough to offer some (good) suggestions. My HOD balked at the cost though (all of about 500 USD!), and the whole thing just draaaaaged. In the end, I got so fed up with the antics that I simply installed everything necessary onto a PC -- on my own time, at that! -- set up my development/testing environment, and dropped the subject. The cost for me to research how to do things, and to actually do it, was, IIRC, about $750 in man-hours!

    This is the same bloody HOD who, in the last year, made me go through a $1000 (in man-hours) procurement process just to save $500.

    The kicker? Recall how you guys would complain about governance or QA people who insist on pushing the process onto you? I'm in governance/QA!

    To be honest, I find that the process is rarely the problem; the problem really lies in pig-headed people who don't understand what the process really requires, or how to flexibly fulfill its requirements. They jump to conclusions and follow through on very silly things for the sake of following something written down in general terms, without due consideration for situational differences.

  • TheFixer (unregistered)

    WOW. I thought I was reading my biography! This is EXACTLY what is happening to me ... I am now the red-headed stepchild and 'have the plague'

  • tragomaskhalos (unregistered)

    To add an extra spoonful of sewage into the mix, the MegaCorp can outsource part or all of this process, so that when your environments do eventually turn up they're guaranteed to be buggered up somehow. Not that I'm speaking from experience or anything ....

  • (cs) in reply to Nathan
    Nathan:
    "The real WTF" is Alex's ridiculous and poorly written dramatizations. These stories are entertaining and informative enough in real life. Why exaggerate and change them so much? It's less interesting when half the story is made up.

    QFT

  • (cs)

    My $0.02.

    The project was escalated to the Division IT head in Week 4. That says to me "This project now has a sponsor." Starting in week 5, every week, the Division IT head gets an update. That would have avoided the angry sponsor in week 16 and probably shortened things to week 10 or so.

    So yes, there was a FAIL here by story submitter. Not epic, but nonetheless.

  • Moto? (unregistered)

    Motorola, is that you?

  • Zuzu (unregistered)

    Share and Enjoy!

  • KaptWilly (unregistered)

    Hey, no fair. Those big black clips add an inordinate amount of h to the paper stack.

  • Thomas (unregistered) in reply to that's right

    "A "simple outage" can cost a company hundreds of thousands if not millions of dollars."

    Depends how you count it. Lost virtual sales aren't a cost, and counting them like that sounds very much like the "creative accounting" that *AA/BSA uses when counting the "cost" of unauthorized copying: "every copy is a lost sale" and "every PC without a valid XP/Vista licence has a pirated one".

    " Sure the red tape is expensive as well as getting approval is slow, but mistakes can be even more expensive. "

    No they can't, in the long run. You see, this approval process makes every project a multi-year adventure instead of the multi-week process it should be, and increases the cost of all projects 10- to 100-fold.

    Multiply that by the number of projects and you get tens of millions, and that is real money, not virtual. But of course, accountants never count the time lost to bureaucracy, because they like it.

    Other people's time is always worth zero to any bureaucrat. Even in a company.
