• ionelmc (unregistered)

    It sounds stupid in retrospect, but that's what you get if you outsource only the grunt work. Companies should outsource everything, if they're going to outsource anything at all, not just the boring stuff - otherwise they get the developers who don't give a shit. Good developers don't want to do boring grunt work.

  • (cs)

    Is it wrong to want to work there? Specifications, guidelines, test environments. If the worst the off-shore team did was point their tests at the production environment, and then only managed to slow the thing to a crawl (rather than, say, dropping a few tables), I say they got off lightly.

  • Rocky (unregistered)

    The real WTF are the people who run their dev, QA and production servers on the same network and allow them to talk to each other...

    Any sane person in a big organization has these segregated just to avoid the above situation; not doing so leaves you open to all kinds of problems, like devs testing code that corrupts the production database.

    Every day I get one more proof that human stupidity is infinite.

  • (cs)

    Rocky has it Right!!!! There should be 100% isolation between the environments (in both directions) for any type of direct connection. All interconnects should be via a controlled "pivot" system....

    While the Developers were not blameless, the WTF clearly belongs to the team that set up the environment.

  • gigaplex (unregistered)
    An automated build system was set up. Every check-in would trigger an automated check-out, full build and full run of all unit tests; if anything failed, the commit was aborted. If there wasn't a check-in, the build-process would trigger every 30 minutes.
    That's just overkill. What's the point in running the exact same revision over and over every 30 minutes? What if commits were coming in at a rate faster than the build/test run could handle on an individual basis?
  • Vlad Patryshev (unregistered)

    Interesting. Hope they were not breaking HIPAA regulations.

    In more serious companies these developers would not even be able to reach production.

    I mean, whose fault is it? Who made it possible for 10 developers to reach internal services of a production server?

  • JM (unregistered) in reply to gigaplex
    gigaplex:
    An automated build system was set up. Every check-in would trigger an automated check-out, full build and full run of all unit tests; if anything failed, the commit was aborted. If there wasn't a check-in, the build-process would trigger every 30 minutes.
    That's just overkill. What's the point in running the exact same revision over and over every 30 minutes?
    Concurrency tests for race conditions, etc., might not fail the first time.
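    JM's point can be demonstrated with a minimal Python sketch (all names here are invented for illustration): a deliberately unsynchronized counter whose race a test may or may not catch on any given run.

    ```python
    import threading

    class Counter:
        """Deliberately unsynchronized: the read and the write in
        increment() are not atomic, so concurrent updates can be lost."""
        def __init__(self):
            self.value = 0

        def increment(self):
            current = self.value       # read
            self.value = current + 1   # write; another thread may have
                                       # incremented in between

    def run_once(threads=8, increments=10_000):
        counter = Counter()
        workers = [threading.Thread(
                       target=lambda: [counter.increment() for _ in range(increments)])
                   for _ in range(threads)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        # With the race present, this can come back lower than
        # threads * increments, but not necessarily on every run.
        return counter.value

    print(run_once(), "of", 8 * 10_000)
    ```

    A test asserting `run_once() == 80_000` might pass hundreds of runs before failing once, which is exactly why re-running the same revision is not pointless.
    
    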
  • (cs)

    This happened in India.

  • Maj najm (unregistered) in reply to JM
    JM:
    gigaplex:
    An automated build system was set up. Every check-in would trigger an automated check-out, full build and full run of all unit tests; if anything failed, the commit was aborted. If there wasn't a check-in, the build-process would trigger every 30 minutes.
    That's just overkill. What's the point in running the exact same revision over and over every 30 minutes?
    Concurrency tests for race conditions, etc., might not fail the first time.

    Then you are doing it wrong. If the tests don't fail every time the problem exists, then they are broken and must be rewritten.

  • gigaplex (unregistered) in reply to JM
    JM:
    gigaplex:
    An automated build system was set up. Every check-in would trigger an automated check-out, full build and full run of all unit tests; if anything failed, the commit was aborted. If there wasn't a check-in, the build-process would trigger every 30 minutes.
    That's just overkill. What's the point in running the exact same revision over and over every 30 minutes?
    Concurrency tests for race conditions, etc., might not fail the first time.
    While that's true, it doesn't require a rebuild every time. You could just run the tests.
    Maj najm:
    Then you are doing it wrong. If the tests will not fail *every time* when the problem exist, then they are broken and must be rewritten.
    It's impossible to guarantee that a unit test will find a race condition on every run. It's not the test that's broken, it's the application itself. Automated tests are one of many available tools; they're not the holy grail.
  • Larry Hennick (unregistered)

    Typical enterprise-flavoured bureaucracy... complete with Java.

    I'm willing to bet the real reason the developers changed the config was because the QA server was too slow.

  • TJ (unregistered) in reply to Rocky
    Rocky:
    The real WTF are the people who run their dev-, QA- and production-servers on the same network and allows them to talk to each other...

    Actually I'd say TRWTF is that it took the network team to tell them where the traffic was coming from. Packet sniffing?? Even if it were load balanced it should be sending X-Forwarded-For...
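    Honouring that header is nearly a one-liner; here's a hypothetical sketch (the helper name is invented, and the header should only be trusted for requests that came through your own proxy):

    ```python
    def client_ip(headers, peer_addr):
        """Return the originating client's IP address.

        A load balancer or reverse proxy appends hops to X-Forwarded-For;
        the leftmost entry is the original client. Fall back to the socket
        peer address when no proxy was involved.
        """
        xff = headers.get("X-Forwarded-For")
        if xff:
            return xff.split(",")[0].strip()
        return peer_addr

    # Request relayed by a proxy at 192.168.0.7:
    print(client_ip({"X-Forwarded-For": "10.1.2.3, 192.168.0.7"}, "192.168.0.7"))
    ```
    
    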

  • Panama Joe (unregistered)

    My brain kept reading "MyWidget" as "Midget". MidgetUrl, MidgetTest, and so on.

  • Conrad Shull (unregistered)

    At our health system, in the middle of a major clinical system install, a consultant asked if we had a test environment. "Sure," was the answer, "We call it Users."

  • C-Derb (unregistered) in reply to Larry Hennick
    Larry Hennick:
    Typical enterprise-flavoured bureaucracy... complete with Java.

    I'm willing to bet the real reason the developers changed the config was because the QA server was too slow.

    +1 I was thinking the exact same thing!

  • EatenByAGrue (unregistered) in reply to Larry Hennick
    Larry Hennick:
    I'm willing to bet the real reason the developers changed the config was because the QA server was too slow.

    And of course, rather than check to see whether their code was pulling down a stupid amount of data, they instead decided to potentially crash the prod system. Brillant!

  • (cs) in reply to Panama Joe
    Panama Joe:
    My brain kept reading "MyWidget" as "Midget". MidgetUrl, MidgetTest, and so on.

    As did I.

    Finally, Bryan forced them to mock up the web services to basically ignore the query parameters and just return a small set of canned records for each query.

    Here's TRWTF. When you return mock data to your unit tests, you invalidate your tests. When your tests request enough data to slow down the server, it should actually slow down the server. The correct response should have been to change the unit tests to query for a smaller set of data.

    Only QA or production-specific application id's will be permitted

    application id's what?

    But yeah, being able to access the production web services from ANY environment other than production is a big WTF.
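    For reference, what Bryan reportedly had them do, mocks that ignore the query parameters and return canned records, looks roughly like this with Python's `unittest.mock` (all names here are invented for illustration):

    ```python
    from unittest.mock import patch

    # Hypothetical code under test: in the real system fetch_widgets()
    # would call the MyWidget web service over HTTP.
    def fetch_widgets(query):
        raise RuntimeError("would hit a live web service")

    def count_widgets(query):
        return len(fetch_widgets(query))

    # Canned records stand in for the service response; the mock ignores
    # the query parameters entirely, as in the article.
    CANNED = [{"id": 1, "name": "sprocket"}, {"id": 2, "name": "flange"}]

    with patch(f"{__name__}.fetch_widgets", return_value=CANNED) as mock_fetch:
        assert count_widgets("type=gear") == 2   # logic verified, no server touched
        mock_fetch.assert_called_once_with("type=gear")
    print("ok")
    ```
    
    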

  • abcd (unregistered)
    They replied that they wanted to make sure that they could access the production systems, and accessing them was the only way.

    You can't argue with a tautology

  • Missing Something (unregistered)

    I'm not getting something.

    The developers were doing local tests against production, so why was the build server being blamed? And why was the build time increasing?

    It's almost like the details of this story are made up...

  • garaden (unregistered) in reply to chubertdev
    chubertdev:
    Finally, Bryan forced them to mock up the web services to basically ignore the query parameters and just return a small set of canned records for each query.

    Here's TRWTF. When you return mock data to your unit tests, you invalidate your tests. When your tests request enough data to slow down the server, it should actually slow down the server. The correct response should have been to change the unit tests to query for a smaller set of data.

    What? I thought mocks were the whole point of unit tests, so you can test one thing, not one thing and everything it talks to. Also helps discourage you from coupling things tightly.

  • (cs) in reply to garaden
    garaden:
    chubertdev:
    Finally, Bryan forced them to mock up the web services to basically ignore the query parameters and just return a small set of canned records for each query.

    Here's TRWTF. When you return mock data to your unit tests, you invalidate your tests. When your tests request enough data to slow down the server, it should actually slow down the server. The correct response should have been to change the unit tests to query for a smaller set of data.

    What? I thought mocks were the whole point of unit tests, so you can test one thing, not one thing and everything it talks to. Also helps discourage you from coupling things tightly.

    When you mock the data, you're testing the mock data, not the actual system.

    Of course, there are situations where you do want to manipulate the data for the purpose of your test, but this is not one of those cases.

  • not an anon (unregistered) in reply to chubertdev
    chubertdev:
    garaden:
    chubertdev:
    Finally, Bryan forced them to mock up the web services to basically ignore the query parameters and just return a small set of canned records for each query.

    Here's TRWTF. When you return mock data to your unit tests, you invalidate your tests. When your tests request enough data to slow down the server, it should actually slow down the server. The correct response should have been to change the unit tests to query for a smaller set of data.

    What? I thought mocks were the whole point of unit tests, so you can test one thing, not one thing and everything it talks to. Also helps discourage you from coupling things tightly.

    When you mock the data, you're testing the mock data, not the actual system.

    Of course, there are situations where you do want to manipulate the data for the purpose of your test, but this is not one of those cases.

    Actually, good mock data can be better than real data. It just often isn't, because programmers are lazy about choosing test cases. (In other words: for your unit tests, use mock data that stresses the corner-cases of your logic/algorithms in addition to hitting all the normal-case paths. Bonus points if you can hit error handlers as well.)
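    A sketch of what "mock data that stresses the corner-cases" means in practice (the function and data are hypothetical):

    ```python
    # Hypothetical unit under test: pick the day with the most visits.
    def busiest_day(visits):
        """Return the most frequent entry, or None for no data."""
        if not visits:
            return None
        counts = {}
        for day in visits:
            counts[day] = counts.get(day, 0) + 1
        return max(counts, key=counts.get)

    # Hand-picked mock data that stresses the corners, rather than a dump
    # of whatever happens to be sitting in production:
    assert busiest_day([]) is None                        # empty input
    assert busiest_day(["mon"]) == "mon"                  # single record
    assert busiest_day(["mon", "tue", "tue"]) == "tue"    # clear winner
    assert busiest_day(["mon", "tue"]) in {"mon", "tue"}  # tie: either is fine
    ```
    
    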

  • DRC (unregistered) in reply to Roby McAndrew

    My thoughts exactly. Sounds like everything was being done right for this project, which in my experience with Large Companies using Offshored Developers is the exception, not the rule.

    However, TRWTF is that all these meetings were held, fingers pointed, code audited, etc., when the simplest answer was staring them in the face the entire time: a mere configuration issue. Me personally, I've fallen victim to the same logical leap: it's not something simple... it MUST be in the code, right?

  • (cs) in reply to Missing Something
    Missing Something:
    I'm not getting something.

    The developers were doing local tests against production, so why was the build server being blamed ? And why was the build time increasing ?

    Its almost like the details of this story are made up...

    Because the tests were being run on the dev server, which had a lot more horsepower than their laptops. That's also why build and test time kept increasing.

  • Beta (unregistered)
    [The offshore developers] replied that they wanted to make sure that they could access the production systems, and accessing them was the only way.
    Not that I believe them, but if conscientious developers discover that they have access to the productions systems, they tell their bosses immediately and put in their own safeguards so that they won't **** anything up by accident.
  • (cs) in reply to not an anon
    not an anon:
    Actually, good mock data can be better than real data. It just often isn't, because programmers are lazy about choosing test cases. (In other words: for your unit tests, use mock data that stresses the corner-cases of your logic/algorithms in addition to hitting all the normal-case paths. Bonus points if you can hit error handlers as well.)

    You're still introducing needless complexities and added variables.

  • Fedaykin (unregistered)

    Sounds like a work environment that is about 8 metric shit tons better than 90% of work environments.

    • Real specifications?
    • Real architecture?
    • Real documentation?
    • Correctly architected unit tests?
    • Real continuous integration?
    • Real QA/QC?

    The only real issue is no isolation between dev and prod web service endpoints. A biggie, but I'll take any env with just that problem any day compared to most places I've been.

    Sign me the fuck up!

  • SleeperSmith (unregistered) in reply to ionelmc

    LOLOLOLOLOLOLOLOL. How cute.

    F*** no.

    We don't care if the developers don't give a shit. Following coding standards and guidelines would be explicitly stated in the contract.

    We hold the design, the code repository, the procedures and processes and everything, because when shit like this flies, we tear up the contract and switch to one of the gazillion other code-monkey clubs out there. Try doing that when they have your design and specification documents and project plans.

    Trust me, you do not want to outsource anything other than the part that's fit for idiots. Because, as you said, they don't give a shit.

  • Cheong (unregistered) in reply to not an anon
    not an anon:
    Actually, good mock data can be better than real data. It just often isn't, because programmers are lazy about choosing test cases. (In other words: for your unit tests, use mock data that stresses the corner-cases of your logic/algorithms in addition to hitting all the normal-case paths. Bonus points if you can hit error handlers as well.)
    That's why we have 4 categories of tests in a normal test suite.
    1. Expected success
    2. Expected failure
    3. Unexpected success
    4. Unexpected failure

    If the count of either "unexpected" category is non-zero, either you've broken something or some rules need to be reviewed.
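    Python's `unittest` happens to model these four buckets directly: plain passes and failures, plus `expectedFailure` for known bugs, whose tests get flagged as unexpected successes if they start passing. A minimal sketch:

    ```python
    import io
    import unittest

    class Categories(unittest.TestCase):
        def test_expected_success(self):          # 1. expected success
            self.assertEqual(2 + 2, 4)

        @unittest.expectedFailure
        def test_expected_failure(self):          # 2. expected failure (known bug)
            self.assertEqual(2 + 2, 5)

        @unittest.expectedFailure
        def test_unexpected_success(self):        # 3. unexpected success: the
            self.assertEqual(2 + 2, 4)            #    "known bug" quietly went away

        # 4. An unexpected failure is simply any plain test that fails.

    # Run the suite programmatically and tally the buckets.
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(Categories)
    result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
    print("expected failures:   ", len(result.expectedFailures))     # 1
    print("unexpected successes:", len(result.unexpectedSuccesses))  # 1
    print("unexpected failures: ", len(result.failures))             # 0
    ```
    
    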

  • SimpleSimon (unregistered)

    These offshore developers don't quite grasp unit testing. Any test that calls a real web service isn't a unit test, it's an integration test. The whole point of integration tests is that they test integration; they are expensive to run, and so shouldn't be run by an automated build server on every commit or every 30 minutes. A proper unit test would have mocked the web service calls.

    It should also be said that no connection config for production should ever have been checked in if the work was being done offshore!

  • the beard of the prophet (unregistered) in reply to chubertdev
    chubertdev:
    When you mock the data, you're testing the mock data, not the actual system.

    ... Which is exactly the point of a unit test. After all, it's the code you want to test, not the data. You want to verify that for a given input your code produces the expected output. And for this, you sure as hell want to mock your data; how else would you know what you are actually testing? Which code paths did your unit tests exercise? Does it cover all error conditions? Does it even cover the "all is well" code path? Hell if you know, if you are testing with live data.

    If you want to test that your code works well with whatever data happens to currently be in the real database, that's the responsibility of integration tests, as other people have already pointed out.

  • (cs) in reply to Rocky
    Rocky:
    Every day I get one more proof that human stupidity is infinite.

    I'm not sure the burden of proof lies with the side arguing for infinite stupidity. ;)

  • (cs)

    I've seen sucky leap year logic, but this takes the Load.

    1. Leap years end in 0, 2, 4, 6, 8, not just 0, 4, 8. But a year ending in any of the 5 evens is only a leap year on alternating occurrences.

    2. There's a day 60 in every year...but sometimes it's 2/29 and sometimes 3/1.

    3. And, if it is day 60, the author intended to just loop for the whole day?! After that, I need a double: WTF! WTF!

    4. And, of course, that's not the worst. Because currentDate is constant to the loop...it loops forever. Thud!!! <-- head hitting desk

    5. And to finish it off, we'd better hope the maintenance routine never encounters a permanent condition...or it's going to cause a "few" extra events...
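    For the record, the correct rules are short; a standard-library sketch (assuming Gregorian dates, which `datetime` uses):

    ```python
    import calendar
    from datetime import date, timedelta

    def is_leap(year):
        # The actual Gregorian rule: every 4th year, except centuries,
        # except every 400th year. Identical to calendar.isleap().
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    def day_60(year):
        """Day 60 of the year: Feb 29 in leap years, Mar 1 otherwise."""
        return date(year, 1, 1) + timedelta(days=59)

    assert is_leap(2012) and is_leap(2000)
    assert not is_leap(2013) and not is_leap(1900)
    assert all(is_leap(y) == calendar.isleap(y) for y in range(1, 3000))
    assert day_60(2012) == date(2012, 2, 29)
    assert day_60(2013) == date(2013, 3, 1)
    ```
    
    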

  • (cs) in reply to the beard of the prophet
    the beard of the prophet:
    chubertdev:
    When you mock the data, you're testing the mock data, not the actual system.

    ... Which is exactly the point of a unit test. After all, it's the code you want to test, not the data. You want to verify that for a given input your code produces the expected output. And for this, you sure as hell want to mock your data; how else would you know what you are actually testing? Which code paths did your unit tests exercise? Does it cover all error conditions? Does it even cover the "all is well" code path? Hell if you know, if you are testing with live data.

    If you want to test that your code works well with whatever data happens to currently be in the real database, that's the responsibility of integration tests, as other people have already pointed out.

    That's a completely different point than what the specific test was for.

    The test was to see if it would return any data, and if you use mock data, you introduce the possibility of producing false positives. If you connect to the correct database, you remove this possible point of failure.

  • Norman Diamond (unregistered) in reply to Coyne
    Coyne:
    I've seen sucky leap year logic, but this takes the Load.
    1. Leap years end in 0, 2, 4, 6, 8, not just 0, 4, 8. But a year ending in any of the 5 evens is only a leap year on alternating occurrences.

    2. There's a day 60 in every year...but sometimes it's 2/29 and sometimes 3/1.

    3. And, if it is day 60, the author intended to just loop for the whole day?! After that, I need a double: WTF! WTF!

    4. And, of course, that's not the worst. Because currentDate is constant to the loop...it loops forever. Thud!!! <-- head hitting desk

    5. And to finish it off, we'd better hope the maintenance routine never encounters a permanent condition...or it's going to cause a "few" extra events...

    Today wasn't even a leap day, but you looped for an entire minus one day before choosing which article you would post this comment on?

    P.S. currentDate isn't constant to the loop. If you have enough memory, tomorrow's new date will be a new date.

  • Johnny USa (unregistered)

    Fucking offshore developers are useless as tits on a bull. Translate that to Hindi.

  • Yuuuuup (unregistered) in reply to SimpleSimon
    SimpleSimon:
    It should also be said that no connection config for production should ever have been checked in if the work was being done offshore!
    Not totally true. If the network is properly locked down (environments are isolated from each other), this issue would never have occurred.

    Also, connection config is fine to have checked in... or at least some semblance of it. If you're using Integrated Security, there are no password issues to worry about; otherwise, do some parameter substitution during your deploy process (e.g. "Server=myServerAddress;Database=myDataBase;User Id=[REPLACEME_UN];Password=[REPLACEME_PW];")

    We're using a combination of tactics:

    1. Environment is locked down
    2. Connection strings have parameter substitution
    3. We're in the process of moving towards Integrated Security
    4. Non-local-use config files are generated based off of a common config, and use xpath/xquery to modify specific pieces reliably.
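    Tactic 2 can be as simple as a deploy-time string substitution; a hypothetical sketch using the placeholder names above (illustrative, not a real deploy script):

    ```python
    # Checked-in template with placeholders instead of real credentials.
    TEMPLATE = ("Server=myServerAddress;Database=myDataBase;"
                "User Id=[REPLACEME_UN];Password=[REPLACEME_PW];")

    def render_connection_string(template, username, password):
        """Deploy-time substitution: fill in per-environment secrets."""
        return (template
                .replace("[REPLACEME_UN]", username)
                .replace("[REPLACEME_PW]", password))

    qa = render_connection_string(TEMPLATE, "qa_user", "qa_secret")
    assert "[REPLACEME" not in qa        # no placeholders survive the deploy
    assert "User Id=qa_user;" in qa
    ```
    
    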
  • Joey USA (unregistered) in reply to Johnny USa

    Kamabakhta apataṭīya ḍēvalaparsa ēka baila para stana kē rūpa mēṁ bēkāra haiṁ. (Transliterated Hindi: "Fucking offshore developers are as useless as tits on a bull.")

  • Joey USA (unregistered) in reply to Johnny USa
    Johnny USa:
    Fucking offshore developers are useless as tits on a bull. Translate that to Hindi.

    कमबख्त अपतटीय डेवलपर्स एक बैल पर स्तन के रूप में बेकार हैं . ("Fucking offshore developers are as useless as tits on a bull.")

  • (cs) in reply to Joey USA
    Joey USA:
    Johnny USa:
    Fucking offshore developers are useless as tits on a bull. Translate that to Hindi.

    कमबख्त अपतटीय डेवलपर्स एक बैल पर स्तन के रूप में बेकार हैं .

    Unfortunately the meaning is lost in translation. Here is a better and more relevant translation; in India nobody actually uses the word अपतटीय for "offshore".

    सब्से बेकार है ओफ्फ्शोरे देवेलोपेर इतना बेकार तो बैल का स्तन भि नही। (Roughly: "Nothing is as useless as an offshore developer; not even tits on a bull are that useless.")

  • Joe (unregistered)

    TRWTF is having an offshore team doing your development and never doing a single code review.

Leave a comment on “Dropping a Load”
