• First (unregistered)

    First

  • Dude (unregistered)

    I'm not going to tell you how to do your jobs, because I don't know.

    That's a first.

  • DocMonster (unregistered)

    And then he realized that he would never be able to make the changes needed to fix the app, and, a year in, quit for greener pastures.

    THE END.

    Proof that knowing a great language (Ruby in this case) doesn't make you a great developer.

  • Quite (unregistered) in reply to DocMonster

    Probably more like: "... then Rick called him out for committing unauthorised code changes to production and he was fired immediately."

  • Sad Language Puppy (unregistered)

    TRWTF is using "performant" non-ironically.

  • Lazerbaems (unregistered) in reply to Quite

    I didn't realize that pull requests went straight to production... time to hax some linux!

  • Anon-Anon (unregistered)

    I'll never quite understand why every time developers start trying to resolve a performance problem, they never start by doing basic performance testing. It's always some pet peeve of theirs that gets thrown on the chopping block, and never the thing that's actually causing issues.

  • Mason Wheeler (unregistered)

    Saw that one coming. As soon as it said "Ruby," well... some things just go without saying.

  • DocMonster (unregistered) in reply to Anon-Anon

    Because "issues" are a great chance to condemn Technology X that you never liked but were forced to use, and laud Technology Y which you prefer and will solve ALL THE PROBLEMS.

    Seriously though, I agree. But it's also the fact that usually, when there are issues, you're under such a crunch to find and resolve them that you're basically forced to pick something at random to point to as the culprit, because you can't "waste time" actually investigating things while BIG CLIENT is screaming at your boss's boss's boss that THIS NEEDS TO BE DONE RIGHT NOW OR WE'RE PULLING OUR CONTRACT.

  • Carlos Moran (google) in reply to DocMonster

    And if by chance the big client is not screaming like that, you still have to spend time explaining to your boss that you need to do at least a bit of performance testing and find out which component is actually slowing things down, rather than just assuming things like "component X was slow last time we looked, so we need to rewrite it".

  • Herby (unregistered)

    When "optimizing", be very sure that you understand the pitfalls. Oftentimes you end up looking at the wrong module. A little performance analysis goes a long way. Then you need to be sure you aren't optimizing the "idle loop" (which I've heard stories about!).

    Then again, as the saying goes: "When you are up to your ass in alligators, you forget that the original objective was to drain the swamp".

  • Developer Dude (unregistered) in reply to Anon-Anon

    I keep telling people that if they don't measure something, they don't know - but they just nod and keep making assumptions without any metrics.

    I AM SO A ROBOT!

  • Essic (unregistered)

    Truly shameful ... it's like a band of kids playing with dirt ... Thank goodness an adult was around.

  • isthisunique (unregistered)

    It's apparent very early on in this story what the problem is: no one is actually measuring anything. They simply guess the cause, then go and fix it.

    Caching is a funny one. People often use things like ROR for RAD. RAD can sometimes have huge performance issues. The solution often becomes caching, not only for ROR but for a lot of things, sometimes for things that are beneficially slow (it's a trade-off). Getting caching, denormalisation, eventual consistency, etc. to all work right is another ballgame, though. People do get it wrong.

    One of my favourites with the elastic stuff is that people use it as part of a RAD philosophy to pretty much forgo optimisation altogether. I've actually seen people talk about how cool they are, being able to just do a simple N^2 implementation and not have to worry, because it'll run on elastic.

  • Matt Westwood (unregistered) in reply to isthisunique

    I was on a project once where a system of cache maintenance had been implemented that -- when working properly -- turned out to actually be a performance bottleneck. Oh how we laughed! I suspect that the caching mechanism that is the subject of this anecdote is a bit like that, which is why they ripped it out last year as part of the rewrite -- they just missed a bit.

    I also wonder whether all that bs added to the story is just fluff to make OP look even better. Hey look everybody, I found the problem! Not only did I find the problem, but I fixed it without anybody noticing! Sorry, 'fraid it makes OP come across as a bit of a dick.

  • PaulMurrayCbr (unregistered)

    I had a similar thing happen in another job.

    Wonderboy decided to write his own security layer. This layer checked a password on every call to any database method. The sys user had a super-secure generated password that was 2048 bytes long. Since this password was seeded with the start time and therefore only contained 33 bits of data, I truncated it to a few characters. The whole system instantly sped up.
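
    A hypothetical reconstruction of the entropy problem (the names and numbers are stand-ins, not the actual system): a long password generated from a PRNG seeded with the start time has only as much entropy as the seed itself, no matter how many bytes it contains.

```ruby
# Sketch, assuming a time-seeded PRNG: two runs with the same seed
# produce the exact same "random" 2048-byte password, so the real
# entropy is only that of the seed (~33 bits for a Unix timestamp).
def generate_password(start_time, length = 2048)
  rng = Random.new(start_time.to_i)   # the seed is the only secret
  Array.new(length) { rng.rand(33..126).chr }.join
end

t = Time.at(1_500_000_000)
a = generate_password(t)
b = generate_password(t)
# a == b: identical passwords from the same seed, despite the length.
```

    Which is why truncating the password changed nothing security-wise: the 2048 bytes were never adding entropy in the first place.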

  • Roman (unregistered)

    Ruby backend programmer and he doesn't know "Delayed Job"?

    That's the REAL WTF.

  • gnasher729 (unregistered) in reply to DocMonster

    I always thought the rule about performance was: don't bother optimising it if you didn't measure it, because if you didn't measure it, it can't have been important. (And the other reason: if you didn't measure it before and after, you don't know whether you improved it.) Logical consequence: if it is so slow that your boss demands changes, then you need to measure.

    And then there are two situations: it's slow and you expect it to be slow because it's a lot of work, or it is slow and you didn't expect it to be slow. The latter case happens a lot, and it is usually not a matter of optimising but a matter of removing stupidity (as in this case). You do things; you expect them to be done so many times, and each time is expected to take so long. A bit of profiling will usually tell you that one of your assumptions is badly wrong. And then you often know what to fix.
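
    The measure-first habit can be sketched in a few lines of Ruby with the standard Benchmark module (the two workloads below are made-up stand-ins, not the article's actual code):

```ruby
require 'benchmark'

# Sketch: time each suspect before rewriting anything.
slow  = Benchmark.measure { 50_000.times { "a" * 200 } }.real
quick = Benchmark.measure { 50.times     { "a" * 200 } }.real

puts format("suspected hot path: %.4fs, other code: %.4fs", slow, quick)
# Only now do you know which one is actually worth optimising.
```

    Five minutes of this up front beats a week of rewriting the wrong component.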

  • Andreas (google) in reply to Quite

    Well, you shouldn't even be able to commit to production, but rather to a "to-be-authorized" branch that can be pushed to production after QA etc.

  • ratchetfreak (unregistered) in reply to Roman

    Unnecessary delayed job that takes time will still eat up CPU resources that the other processes/VMs could use.

  • isthisunique (unregistered) in reply to gnasher729

    Premature optimisation is sometimes mandatory or beneficial.

    Historically, certain cheap micro-optimisations can make your program faster overall as long as they don't mean writing bizarre code. I've seen surprising results with this. Essentially, these are things where, if you know about them and do them every time, the cost is basically zero. Of course you do still need to actually confirm those approaches are faster. Depending on the language or system, it may or may not make much difference. The problem is, a new version of a language can turn everything around on the performance front. These days it rarely matters as much, because hardware is fast, and if you really want micro-optimisations, you make a recompiler.

    You still need to avoid certain things, though, where doing it the simplest way up front can trigger a performance bug in the language. A performance bug is where you get atypical performance under certain circumstances. One language, for example, has dictionaries where, in certain circumstances, deletion becomes exponentially slower, especially if you do it in certain ways; one of the simplest ways, on some interpreter VMs, can exacerbate the problem to the point that everything runs at several million operations a second except delete, which falls to four or five operations a second. Another classic example is the string buffer.
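
    The string-buffer classic can be sketched in Ruby (hypothetical helper names): `s += part` allocates a fresh string on every iteration, which is quadratic overall, while `s << part` appends in place and stays roughly linear.

```ruby
# Same result, very different cost profile as the input grows.
def build_with_plus(parts)
  s = ""
  parts.each { |p| s += p }   # copies the whole string each time: O(n^2)
  s
end

def build_with_append(parts)
  s = +""                     # unfrozen string literal
  parts.each { |p| s << p }   # mutates in place: ~O(n)
  s
end

parts = ["ab"] * 1_000
build_with_plus(parts) == build_with_append(parts)
```

    Knowing this costs nothing once you know it, which is exactly the kind of "free" micro-optimisation described above.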

    You also need big-O optimisation. If something is exponential (or even linear) and you know more and more things are going to be added to it, quite often you have to see if you can optimise it.
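
    A minimal sketch of the big-O point in Ruby (the function names are invented for illustration): checking membership against an Array inside a loop is O(n*m), while a Set makes each lookup roughly constant time, so the same logic scales completely differently.

```ruby
require 'set'

# Find duplicated items: quadratic version scans an Array each time.
def dupes_quadratic(items)
  seen = []
  items.select { |i| found = seen.include?(i); seen << i; found }.uniq
end

# Linear version: Set#add? returns nil when the item was already present.
def dupes_linear(items)
  seen = Set.new
  items.select { |i| !seen.add?(i) }.uniq
end
```

    Both return the same answer on small inputs; only one survives the input growing "more and more".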

  • dkf (nodebb) in reply to isthisunique

    The usual rule is “pay attention to the big-Os and let the little ones look after themselves”, at least as a first cut. If there are still problems after that, measure where they are before anything else.

Leave a comment on “Cache Congestion”
