• (cs) in reply to RevMike
    RevMike:

    I'd guess that a lot of good developers have only written an algorithm like that once or twice professionally.


    A good developer would implement, say, a hash table at most once per language (and only when there is no suitable library available).
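
    For illustration only, a minimal sketch of what such a one-off hash table might look like, assuming C++ with separate chaining; every name here is invented for the example, and a real library version would of course be preferred:

        #include <cstddef>
        #include <functional>
        #include <list>
        #include <utility>
        #include <vector>

        // Minimal separate-chaining hash table: one bucket per slot,
        // collisions handled by appending to the bucket's list.
        template <typename K, typename V>
        class NaiveHashTable {
            std::vector<std::list<std::pair<K, V>>> buckets_;

            std::size_t index(const K& key) const {
                return std::hash<K>{}(key) % buckets_.size();
            }

        public:
            explicit NaiveHashTable(std::size_t slots = 101) : buckets_(slots) {}

            void put(const K& key, const V& value) {
                for (auto& entry : buckets_[index(key)]) {
                    if (entry.first == key) { entry.second = value; return; }
                }
                buckets_[index(key)].emplace_back(key, value);
            }

            // Returns a pointer to the stored value, or nullptr if absent.
            const V* get(const K& key) const {
                for (const auto& entry : buckets_[index(key)]) {
                    if (entry.first == key) return &entry.second;
                }
                return nullptr;
            }
        };

    Even this toy version skips rehashing on growth, deletion, and iterator support, which is exactly why writing it more than once per language is a waste.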
  • (cs) in reply to RevMike
    RevMike:
    I'd guess that a lot of good developers have only written an algorithm like that once or twice professionally.


    The younger developers who grew up with C++ STL and/or Java have probably never done it at all.

    I know I haven't, though I did once (still at university) implement a B-tree. It's still the programming achievement I'm most proud of since that's a damn complex data structure with lots of edge cases, and I did it all by myself.

    In the real world of software development, you rarely get to build a self-contained thing on your own; it's mostly teamwork and maintenance programming.
  • (cs)

    It is impossible to know what this code is doing without knowing the language it is written in. Please always specify the programming language. Assuming it is C++, it is pretty clear that the + operator (and maybe the ToString() method) has been redefined and contains all the business logic *-)
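
    For anyone who wants the joke spelled out: a hypothetical C++ sketch in which operator+ (and ToString()) have been redefined to hide business logic. The Invoice type and its surcharge rule are invented for this example:

        #include <iostream>
        #include <string>

        struct Invoice {
            double amount;

            // "Addition" that quietly applies a surcharge: business
            // logic smuggled into an innocent-looking operator.
            Invoice operator+(const Invoice& other) const {
                constexpr double kSurcharge = 1.05;  // made-up rule
                return Invoice{(amount + other.amount) * kSurcharge};
            }

            std::string ToString() const {
                return "$" + std::to_string(amount);
            }
        };

        int main() {
            Invoice a{100.0}, b{50.0};
            std::cout << (a + b).ToString() << "\n";  // $157.500000, not $150
            return 0;
        }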

  • (cs) in reply to RevMike
    RevMike:

    A funny thing happened in the IT world. In the 60s and 70s, lots of research went into making algorithms and processes as fast and efficient as possible. In that period, hardware was expensive and developers were cheap, so it was best to optimize for the best use of the expensive resource.

    In the 90s and this decade, the pendulum has swung: hardware is cheap and developers are expensive. Having a developer spend time trying to make a reasonably fast process faster is usually foolhardy, and having a developer implement a custom sort routine when the library has one that is almost as fast and fully tested is lunacy. Research, reflecting this, has now moved on to developer productivity.


    True enough. However, we have not forgotten the research of our forefathers; we make their work easy to use so that our modern programs are fast.

    If programmers used bubble sort (or worse, bogosort), stored all their data in singly linked lists, and did other such things, computers would be slow even with modern hardware. We can afford a little inefficiency because most programs are O(1) or O(n) glue code around more complex algorithms (often O(n log n), but always the best algorithm for the job) written by the best programmers we have.

    As you pointed out, we use hash tables all the time, but they are part of our library/language, so we don't write them. In many cases it is easier to use an O(1) hash table than a linked list, because both are implemented but the linked list doesn't provide a find, so we have to iterate through it ourselves (see the sketch below). This is strong encouragement to choose the fast data structure right from the start.
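
    To make both points concrete, here is a minimal sketch in modern C++ using only standard-library containers; the data and names are made up for illustration. The library hands us a tuned O(n log n) sort and a constant-time find, while the linked list forces us to write the traversal ourselves.

        #include <algorithm>
        #include <iostream>
        #include <list>
        #include <string>
        #include <unordered_map>
        #include <utility>
        #include <vector>

        int main() {
            // Library sort: a tuned O(n log n) algorithm in one call,
            // instead of a hand-rolled O(n^2) bubble sort.
            std::vector<int> numbers = {5, 3, 8, 1, 9, 2};
            std::sort(numbers.begin(), numbers.end());

            // Hash table: O(1) average lookup, and find() comes for free.
            std::unordered_map<std::string, int> ages = {{"alice", 30}, {"bob", 25}};
            if (auto it = ages.find("bob"); it != ages.end()) {
                std::cout << "bob is " << it->second << "\n";
            }

            // Linked list: lookup means iterating ourselves, O(n) every time,
            // and more code to write and to get wrong.
            std::list<std::pair<std::string, int>> people = {{"alice", 30}, {"bob", 25}};
            auto found = std::find_if(people.begin(), people.end(),
                                      [](const auto& p) { return p.first == "bob"; });
            if (found != people.end()) {
                std::cout << "bob is " << found->second << "\n";
            }
            return 0;
        }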
  • (cs) in reply to OneFactor

    thedailywtf.com is a church. We worship at the holy altar of bad code by ridiculing it and those who create it.
