• Will Kirkby (github)

    requests/c418b58b-164d-4e1f-970b-ed00dea855b6/frist!

  • Steve (unregistered)

    TRWTF is that in 2 years she couldn't find the time to refactor the code herself.

  • (nodebb) in reply to Steve

    Or couldn't get approval for her solution.

    Maybe because it wasn't enterprisey enough.

    It was definitely not needed, obviously: with a dozen test files of a few kilobytes each, the system is sufficiently fast, so there's absolutely no need to fix anything here.

  • Oliver Jones (google)

    Tee hee. Thanks guys. AWS is meeting their revenue goals from you, so my cloud rates stay low.

  • Mason Wheeler (unregistered)

    In those two years, no matter how many times Melinda nominated this particular block of code, it’s remained their lowest-priority piece of technical debt.

    Horrifying possibility that no one seems to have considered yet: that's because it objectively is! Maybe the codebase is riddled with even more horrible WTFs than this!

  • Anon-Anon (unregistered)

    I can see why it hasn't been tackled. It works, even though it's slow and expensive. And as long as the clients regularly change their request-ids, it's probably not a huge problem. On the other hand, if you replace it and screw up, you just became the guy who broke logging. Which people may not even notice until they go to check their logs and realize that the logs they were expecting aren't there.

    Replacing it has a high level of risk, with a relatively low reward. Worse, that reward isn't tied to the developer's personal interest. I assume they don't get paid more when the Amazon S3 bill is slightly lower for the year.

    I'm assuming that Melinda is a non-technical manager; otherwise, why not just fix it herself? If so, and she really wants the devs to jump on this, incentivizing them to care about it would be a first step.

  • Appalled (unregistered) in reply to Anon-Anon

    Agree totally. If it works don't WTF with it.

    Slowness does not mean broken. I would guess they get a new request ID for each visit. I'll bet they also (already) put a disclaimer on the Web Site: "If you're doing multiple uploads and things are slowing down, please close all Browser sessions and return." That would take about 5 minutes (depending on mandated Deployment hoops to jump thru), is zero-risk, does NOT look terribly foolish (no need to explain WHY), and solves the problem.

  • guester (unregistered) in reply to Anon-Anon

    Or she's part of a team, and they decide what to work on as a team. Somebody going solo to fix something that a team like this decided, as a group, they weren't going to work on would not be well-liked, to put it lightly.

    There's also the undertone that the meetings, and some of her co-workers themselves, are slight WTFs. It isn't too much of a WTF that every single one of those nominations is technical debt that affects only the developers, but they seem to have mastered the art of incrementally "fixing" things people don't like by swapping in something new and shiny instead (rather than properly talking out why those things are bad/hated).

    Sure, it's a risk to refactor it (all the more reason to delegate it across a team), but at the same time, at least she's suggesting a piece of technical debt that wouldn't be at so much risk of being revisited over and over again, and that would lead to speed improvements for users. Possibly significant improvements; no way of knowing the "various reasons", after all.

  • isthisunique (unregistered)

    This is also a bad practice:

    if (res.statusCode >= 200 && res.statusCode < 300)

    It might seem clever and RFC-compliant, but in reality you should only code for the status responses you actually expect. There are some pretty weird HTTP codes in the 2xx range.
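
    A minimal sketch of that stricter approach, assuming a Node-style response object like the one in the snippet above (the handler name and the particular codes accepted here are just illustrative):

    // Hypothetical handler: accept only the success codes we actually expect,
    // instead of blanket-accepting the whole 2xx range.
    function handleStatus(res, callback) {
        switch (res.statusCode) {
            case 200: // OK
            case 204: // No Content
                return callback(null, res);
            default:
                // 206 Partial Content, 207 Multi-Status, etc. end up here.
                return callback(new Error('Unexpected status code: ' + res.statusCode));
        }
    }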

  • Kagan MacTane (unregistered)

    I see nobody has yet noticed the use of recursion in the callback that gets passed to get() (which, by the way, is a crappy name for a function - what's the chance that's stomping on some other global function?). In findFreeKey(), we pass get() a callback that, in turn, calls findFreeKey() in order to increment the startNum.

    So, if your key number is too high, you get a "Too much recursion" error. Since the calls go findFreeKey() -> get() callback -> findFreeKey() -> get() callback, I don't think it can be optimized as a tail call.

    They're just lucky this hasn't blown up in their faces yet.
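
    For anyone who hasn't seen the article's code, the shape being described is roughly the following (a reconstruction from this comment, not the actual code; keyPrefix, exists, and done are made-up names). Whether the stack really grows depends on whether get() ever invokes its callback synchronously:

    // Rough reconstruction of the pattern described above.
    function findFreeKey(keyPrefix, startNum, done) {
        get(keyPrefix + startNum, function (err, exists) {
            if (exists) {
                // Key already taken: call findFreeKey again from inside
                // get()'s callback, with the next number.
                return findFreeKey(keyPrefix, startNum + 1, done);
            }
            return done(null, keyPrefix + startNum);
        });
    }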

  • One is the loneliest number (unregistered)

    So it's a company where everyone has enough free time to go through existing code and vote for what they want to change? Where do I sign up?

  • The Coder (unregistered)

    So, there are still two WTFs here:

    1. Potential stack overflow due to recursion, although the language might optimize the tail recursion.
    2. The race condition could result in lost logs. This one sounds like a real issue.

    Maybe (1) never occurred in practice due to the small number of collisions, and (2) was simply never noticed.

  • djingis1 (unregistered)

    There is no such thing as tail recursion here. Yes, "findFreeKey" is the tail call of the anonymous function passed as callback to "get", but that's not tail recursion. For it to be tail recursion, the anonymous callback would have to call itself.
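
    For contrast, a direct self-call in tail position (a toy example, not anything from the article) looks like this; in the article's code, the "recursive" step only ever happens inside the callback handed to get():

    // Classic tail recursion: the function calls itself, and that call is
    // the very last thing it does.
    function countDown(n) {
        if (n <= 0) return 'done';
        return countDown(n - 1);
    }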

  • TimothyB (unregistered) in reply to guester

    Sounds like she hasn't developed much influence inside the team.

    Being right isn't sufficient: if no one is going to listen to me, it doesn't matter how good my technical chops are.

  • Ron Fox (google)

    If that's the lowest priority bit of technical debt -- I wonder what the highest looks like?

  • (nodebb)

    This doesn't even look like recursion to me, just async callbacks. It's a bit like using setTimeout as a poor man's setInterval.
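
    For reference, the setTimeout pattern being alluded to (doWork and the one-second interval are stand-ins): each tick schedules the next one from inside the callback, so nothing accumulates on the call stack in between.

    // Self-rescheduling timer, the "poor man's setInterval":
    function tick() {
        doWork();
        setTimeout(tick, 1000); // schedule the next tick; this call returns immediately
    }
    setTimeout(tick, 1000);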

  • Lucifer (unregistered)

    How about using a date-time stamp + a 3-digit random number as the ID?
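
    A minimal sketch of that scheme (the function name is made up; collision resistance is only as good as the clock resolution plus three random digits):

    // Hypothetical ID generator: millisecond timestamp plus a zero-padded
    // 3-digit random suffix, e.g. "1700000000000-042".
    function makeLogId() {
        var random = Math.floor(Math.random() * 1000); // 0..999
        var suffix = ('00' + random).slice(-3);        // zero-pad to 3 digits
        return Date.now() + '-' + suffix;
    }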
