• (cs)

    Hopefully he was an intern at the company when he first wrote that abomination...

  • whatever (unregistered)

    That's the problem with most WTFs. You can never tell if it's an act of utter incompetence, or malevolent genius.

  • Addison (unregistered)

    Even on purpose, writing an app like that is a dick move.

  • ja (unregistered)

    I am inclined to believe he was a genius.

    captcha: feugiat

  • (cs) in reply to whatever

    I think it's safe to assume this one was incompetence. Either 1) he lacked the level of skill required by his job, or 2) he was lazy to a fault. Either one I would classify as incompetent.

    As for malevolent genius: that would be if he had siphoned off company data for years to sell to a competitor, or something like that.

  • sjakie (unregistered)

    This guy is a genius!

    He is just using the Lazy Allocation of Memory Eventing coding guidelines as found in many well designed applications that follow the Rigorous Enumeration Threading And Resolvable De-allocation pattern. Duh!

  • FDude (unregistered)

    As an intern I once had to rewrite an app doing more or less the same thing. It took me 2 minutes to replicate the functionality with rsync, and 2 more hours to script something able to use the already existing conf file... The wheel already exists!

  • Tov Are (unregistered) in reply to galgorah

    This is not so uncommon. I translated a piece of C code to Java 1.1 a few years ago and got a 99% speedup as a result (you'd expect it to go in the other direction). The original C code appeared as though it was heavily optimized: the hashing function included lots of documentation, and cute shortcuts were used throughout. The culprit was the memory management and inner-loop allocations.

  • Incourced (unregistered)

    On the one hand is worth two in the bush.

  • SR (unregistered)

    Reminds me of merging a couple of batch files long, long ago. The new process took 10-20 minutes depending on the size of the data sets. Easily time for a quick ciggie. I'd often see the boss in there: "smoking again, SR?" she'd ask. "My PC's hard at it," I'd reply.

    I'm glad the place closed down before we'd migrated off DOS 6.1 and onto that horrible multitasking Windows.

  • Hugh Brown (unregistered)

    So this code just returns the length of the file? You don't have to read any bytes of the file to know the file length -- no allocations/reallocations or file-reading necessary.
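
    For what it's worth, a minimal sketch of that idea against the same MFC CFile API used elsewhere in this thread (the function name is hypothetical):

    long GetBinaryFileLength(CString strFile)
    {
        // No buffer, no reads: just open the file and ask for its size.
        CFile file;
        if (!file.Open(strFile, CFile::modeRead)) return 0;
        return (long)file.GetLength();
    }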

  • Nice try (unregistered) in reply to galgorah
    galgorah:
    As for malevolent genius: that would be if he had siphoned off company data for years to sell to a competitor, or something like that.

    Maybe he's also done that, but nobody believes it was him because this code here "proves" he's too stupid to pull it off. Now that would be true malevolent genius.

  • Major Sir Jerry-Pending (unregistered) in reply to SR
    SR:
    Reminds me of merging a couple of batch files long, long ago. The new process took 10-20 minutes depending on the size of the data sets. Easily time for a quick ciggie. I'd often see the boss in there: "smoking again, SR?" she'd ask. "My PC's hard at it," I'd reply.

    I'm glad the place closed down before we'd migrated off DOS 6.1 and onto that horrible multitasking Windows.

    http://xkcd.com/303/

    Enjoy

  • Nice try (unregistered) in reply to Tov Are
    Tov Are:
    This is not so uncommon. I translated a piece of C code to Java 1.1 a few years ago and got a 99% speedup as a result (you'd expect it to go in the other direction)

    Only if you're gullible

  • Brompot (unregistered)

    TRWTF is that Rik V, having brought it down to under 20 seconds, expended effort to further reduce it to 13 seconds. The payback time for that last adjustment will probably be measured in centuries.

  • (cs)

    I love the way that pBuffer is declared as an array of pointers, but actually used as an array of BYTE.

  • Anonymous (unregistered)

    Mmmm, delicious incompetence. This isn't a deliberate slow-down loop from a lazy genius - a true lazy genius blends into his environment; he moves like a stalking tiger, codes like the shadow of a swooping eagle and then disappears into the night, leaving nothing but a fake username against his check-in history. Most importantly of all, the lazy genius does not get fired - because management doesn't even know he works here.

  • Swa (unregistered) in reply to FDude
    FDude:
    As an intern I once had to rewrite an app doing more or less the same thing. It took me 2 minutes to replicate the functionality with rsync, and 2 more hours to script something able to use the already existing conf file... The wheel already exists!
    And for those who aren't into rsync, even Mickeysoft's Robocopy does this rather well.

    Captcha: vulputate, from the Dutch vulpatat, meaning stuffed potato.

  • (cs)
    long ReadBinaryFile(CString strFile, BYTE** pResult)
    {
        CFile file;
        if(!file.Open(strFile, CFile::modeRead)) return 0;
        pResult = new BYTE[file.GetLength()];
        if(!pResult) return 0;
        return file.Read(pBuffer, file.GetLength());
    }
  • rewind (unregistered) in reply to Brompot
    Brompot:
    TRWTF is that Rik V, having brought it down to under 20 seconds, expended effort to further reduce it to 13 seconds. The payback time for that last adjustment will probably be measured in centuries.

    The company may actually realize the time savings here within a couple of years if he spent less than 2 hours on it. Here comes that big promotion.

    Also, anyone at this company ever hear of job scheduling?

  • Mike D. (unregistered) in reply to Brompot
    Brompot:
    TRWTF is that Rik V, having brought it down to under 20 seconds, expended effort to further reduce it to 13 seconds. The payback time for that last adjustment will probably be measured in centuries.
    Assuming that he learned nothing from the exercise, and never reused the code in anything that executed more often, that would be true.

    Of course, this also assumes that humans are as time-efficient as machines, i.e. that five minutes spent on one thing are equal to five minutes spent on something else. I've never mastered that level of self-management-fu. If I have five minutes, I can get five minutes of work (or optimizations or whatever) done on whatever I'm doing, or I can spend five minutes task-switching to something else (it usually takes me longer than that to remember everything about where I left off) with no measurable, visible progress, i.e. I've just wasted 5 minutes. (And remember that achievement is 70% perception and 30% effort, and task-switching is always perceived as slacking.)

    A few years ago, I gave up trying to optimize my life like that and decided to try to minimize task switches. It's a lot simpler and less stressful, and if they want to fire me, they'll find a "reason" no matter how "efficient" I am.

  • (cs) in reply to GettinSadda
    GettinSadda:
    long ReadBinaryFile(CString strFile, BYTE** pResult)
    {
        CFile file;
        if(!file.Open(strFile, CFile::modeRead)) return 0;
        pResult = new BYTE[file.GetLength()];
        if(!pResult) return 0;
        return file.Read(pBuffer, file.GetLength());
    }
    Nice try; it even comes close to being able to compile and work. But... You need to use *pResult instead of pResult in there a couple of times so that you're working with a buffer whose address is stored in the variable pointed to by pResult. (I prefer to keep the buffer location in a local variable until I'm sure that the function is going to succeed, at which point I update the referenced variable. The idea being that failures limit the amount of damage that occurs. But then I also write in C and not C++ for preference.)
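
    A minimal corrected sketch along those lines - same MFC API as the quoted code, with the buffer kept in a local until the read has succeeded, and new(std::nothrow) so that the NULL check actually means something:

    long ReadBinaryFile(CString strFile, BYTE** pResult)
    {
        CFile file;
        if (!file.Open(strFile, CFile::modeRead)) return 0;
        UINT len = (UINT)file.GetLength();       // assumes the file fits in a UINT, as the quoted code does
        BYTE* buf = new(std::nothrow) BYTE[len]; // NULL on failure, no exception
        if (!buf) return 0;
        UINT read = file.Read(buf, len);
        *pResult = buf;                          // only publish the buffer once everything has worked
        return (long)read;
    }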
  • Steve the Cynic (unregistered) in reply to GettinSadda
    GettinSadda:
    long ReadBinaryFile(CString strFile, BYTE** pResult)
    {
        CFile file;
        if(!file.Open(strFile, CFile::modeRead)) return 0;
        pResult = new BYTE[file.GetLength()];
        if(!pResult) return 0;
        return file.Read(pBuffer, file.GetLength());
    }

    Minor errors: should be *pResult = new ..., if(!*pResult) return..., and file.Read(*pResult,...).

    delenit: de-len it: remove its length.

  • Anonymous (unregistered) in reply to Mike D.
    Mike D.:
    Brompot:
    TRWTF is that Rik V, having brought it down to under 20 seconds, expended effort to further reduce it to 13 seconds. The payback time for that last adjustment will probably be measured in centuries.
    Assuming that he learned nothing from the exercise, and never reused the code in anything that executed more often, that would be true.

    Of course, this also assumes that humans are as time-efficient as machines, i.e. that five minutes spent on one thing are equal to five minutes spent on something else. I've never mastered that level of self-management-fu. If I have five minutes, I can get five minutes of work (or optimizations or whatever) done on whatever I'm doing, or I can spend five minutes task-switching to something else (it usually takes me longer than that to remember everything about where I left off) with no measurable, visible progress, i.e. I've just wasted 5 minutes. (And remember that achievement is 70% perception and 30% effort, and task-switching is always perceived as slacking.)

    A few years ago, I gave up trying to optimize my life like that and decided to try to minimize task switches. It's a lot simpler and less stressful, and if they want to fire me, they'll find a "reason" no matter how "efficient" I am.

    Sounds like you think about it way too much - all that time spent optimising your life is surely counter-productive to the task of actually living it. On a related note, can you pass the Turing test?

  • (cs) in reply to Steve the Cynic
    Steve the Cynic:
    GettinSadda:
    long ReadBinaryFile(CString strFile, BYTE** pResult)
    {
        CFile file;
        if(!file.Open(strFile, CFile::modeRead)) return 0;
        pResult = new BYTE[file.GetLength()];
        if(!pResult) return 0;
        return file.Read(pBuffer, file.GetLength());
    }
    Minor errors: should be *pResult = new ..., if(!*pResult) return..., and file.Read(*pResult,...).
    Yeah - that's what happens when I write code quickly while waiting on hold on a phone call!
  • (cs) in reply to Mike D.
    Mike D.:
    A few years ago, I gave up trying to optimize my life like that and decided to try to minimize task switches. It's a lot simpler and less stressful.
    http://www.joelonsoftware.com/articles/fog0000000022.html
  • Your Name (unregistered)

    TRWTF is using Hungarian notation with verbose variable names in a 34-line function.

  • (cs)

    This almost screams the need for a speedup loop.

  • (cs)

    At least the original author optimized the code. Originally, the code seeked back to the start of the file, but then he optimized it so he just memcpy'd the results from the previous loop iteration.

  • Anonymously Yours (unregistered)

    You should have just trimmed a couple zeros off the speed-up loop.

  • Da man (unregistered) in reply to sjakie

    I opt for genius.

    Anybody who can create a good enough excuse to peacefully read his newspaper during office time can't be anything else.

  • (cs)
    FDude:
    As an intern I once had to rewrite an app doing more or less the same thing. It took me 2 minutes to replicate the functionality with rsync, and 2 more hours to script something able to use the already existing conf file... The wheel already exists!
    ^ This.
  • My Name? (unregistered)
    F:
    f!

    class first { char* buf; void free_buf() { for(int i=0;i<=1024;++i) buf[i]=0; }

    public: first() { buf = new char[256]; int i=0; while(FIRST[i]) buf[i]=FIRST[i]; } ~first() { delete buf; } ostream& output(ostream& s) { cout<<buf<<"\n"; } }; const char* first::FIRST="First!";

    int main() { first F; ofstream write("/tmp/first.txt"); F.output(write); write.close(); return system("cat /tmp/first.txt"); } // acsi

  • (cs) in reply to Steve the Cynic
    Steve the Cynic:
    GettinSadda:
    long ReadBinaryFile(CString strFile, BYTE** pResult)
    {
        CFile file;
        if(!file.Open(strFile, CFile::modeRead)) return 0;
        pResult = new BYTE[file.GetLength()];
        if(!pResult) return 0;
        return file.Read(pBuffer, file.GetLength());
    }

    Minor errors: should be *pResult = new ..., if(!*pResult) return..., and file.Read(*pResult,...).

    Actually there is no need to check for NULL, unless someone has incorrectly overloaded operator new.

  • (cs)

    I've seen worse/better improvements. I was once enlisted as an intern at a hospital, where I was responsible for writing a search function for a website in ASP classic. Usually that would be a question of doing a full-text search on a couple of tables, but in this case it involved searching a load of HTML and text files that the managing doctor had supplied, of course placed in a real developmestruction environment.

    Before I started, another intern had spent weeks developing a search function and was finally done. Proud, she showed the manager how well it worked on her development machine, which it did. With a handful of pages. When it was deployed to production, the search would simply time out and show nothing.

    Guess what: she did a live search on the file system for every query. A system with roughly 1500-2000 MS Frontpage 2000-generated HTML files. Just for the fun of it I decided to let the thing run its full course. I disabled the timeout and started a random search in the morning. When I went home 8 hours later, it was still running. It finished right after I got back to work the next day, meaning a simple search for the word "test" took just over 24 hours.

    I managed to scrape something off of that time. I indexed the full web site into a database and periodically ordered a reindex through a scheduled task. Since the reindex would only do anything for new or changed files, it would be done in seconds. And the actual search query? It sped up from 24 hours and a few minutes to about 0.4 seconds. I'd call that an improvement.
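
    (A sketch of that reindexing idea in modern C++ terms, with hypothetical names - the real thing was ASP classic plus a database. Only files modified since the last pass get re-read:)

    #include <filesystem>
    namespace fs = std::filesystem;

    void IndexFile(const fs::path& p);  // stand-in for the actual database update

    // Re-index only the files that changed since the last indexing pass.
    void ReindexChangedFiles(const fs::path& root, fs::file_time_type lastIndexed)
    {
        for (const auto& entry : fs::recursive_directory_iterator(root))
        {
            if (!entry.is_regular_file()) continue;
            if (entry.last_write_time() <= lastIndexed) continue; // unchanged: skip
            IndexFile(entry.path()); // new or changed: re-read just this one file
        }
    }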

  • (cs) in reply to Brompot
    Brompot:
    TRWTF is that Rik V, having brought it down to under 20 seconds expended effort to further reduce it to 13 seconds. The payback time for that last adjustment must probably be measured in centuries.
    I was going to post precisely this. As a manager, I obviously wouldn't be that happy about a developer that wrote such poor code that it took forty minutes, but I also wouldn't be hugely happier about a developer who, having shaved 2380 seconds off that, spends significant further time taking that up to 2387.
    Mike D.:
    Of course, this also assumes that humans are as time-efficient as machines, i.e. that five minutes spent on one thing are equal to five minutes spent on something else. I've never mastered that level of self-management-fu. If I have five minutes, I can get five minutes of work (or optimizations or whatever) done on whatever I'm doing, or I can spend five minutes task-switching to something else.
    If getting it down to 13 seconds really was five minutes' work, and he got it down to 20 seconds five minutes before a meeting or something, then sure, spend those further five minutes tweaking it further rather than waste them entirely. If, as I more suspect, he finished this then went onto something else, those five minutes of task switching were required either way, and the five minutes of optimizations were simply a waste. And I suspect we're talking more like an hour anyway!
  • (cs) in reply to fist-poster
    fist-poster:
    Steve the Cynic:
    GettinSadda:
    long ReadBinaryFile(CString strFile, BYTE** pResult)
    {
        CFile file;
        if(!file.Open(strFile, CFile::modeRead)) return 0;
        pResult = new BYTE[file.GetLength()];
        if(!pResult) return 0;
        return file.Read(pBuffer, file.GetLength());
    }
    Minor errors: should be *pResult = new ..., if(!*pResult) return..., and file.Read(*pResult,...).
    Actually there is no need to check for NULL, unless someone has incorrectly overloaded operator new.
    Some compilers have the ability to turn off the exception thrown by operator new (and yes I am used to working in this b*stardized state), or you can just use new(std::nothrow)
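
    For reference, a tiny sketch of that nothrow form (the function name is hypothetical):

    #include <new>  // std::nothrow

    typedef unsigned char BYTE;  // as in the Windows headers

    BYTE* AllocateFileBuffer(size_t length)
    {
        // This form returns NULL on failure instead of throwing std::bad_alloc,
        // which is what makes the NULL check in the quoted code meaningful.
        return new(std::nothrow) BYTE[length];
    }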
  • (cs)

    Or you could just run diff -r -q to get a list of filenames to copy.

    Then, depending on your OS and available libraries, you either use the library-provided function to copy files, or you shell out and run cp.
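
    A C++17 sketch of that "copy only what changed" idea, assuming - as rsync's quick check does - that change can be approximated by size and timestamp:

    #include <filesystem>
    namespace fs = std::filesystem;

    // Copy src over dst only if dst is missing, differently sized, or older.
    void CopyIfChanged(const fs::path& src, const fs::path& dst)
    {
        bool changed = !fs::exists(dst)
                    || fs::file_size(src) != fs::file_size(dst)
                    || fs::last_write_time(src) > fs::last_write_time(dst);
        if (changed)
            fs::copy_file(src, dst, fs::copy_options::overwrite_existing);
    }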

  • Bim Job (unregistered)

    This is one of the very, very few cases where I would opt for rewriting a C application, from scratch, in PHP.

    Actually, even a rewrite in Assembler using INT 21h would be infinitely preferable.

  • highphilosopher (unregistered) in reply to My Name?
    My Name?:
    F:
    f!

    class first { char* buf; void free_buf() { for(int i=0;i<=1024;++i) buf[i]=0; }

    public: first() { buf = new char[256]; int i=0; while(FIRST[i]) buf[i]=FIRST[i]; } ~first() { delete buf; } ostream& output(ostream& s) { cout<<buf<<"\n"; } }; const char* first::FIRST="First!";

    int main() { first F; ofstream write("/tmp/first.txt"); F.output(write); write.close(); return system("cat /tmp/first.txt"); } // acsi

    Why did you write this? I really want to know why. Of all the code snippets that get dropped on this site, what stimulus can you blame this response on?

    Honestly this worries me a lot. I think TRWTF that happens to me on a daily basis is people who overcode things. It seems to be a rampant problem. I'm not sure if it's the programmer version of Dick measuring or not, but either way it's #$%@#$% annoying.

  • F (unregistered) in reply to highphilosopher

    What's the non-programmer version of "Dick measuring"? And why is Dick proper?

  • PJ Volk (unregistered)

    Going meta is a useful skill. I don't care how good your Java or C++ skills are. I know you're an excellent programmer, but do you really need to write a program for this? And then there's the defensive meeting where the programmer says how efficient it will be, that we need the speed of (insert language here) to handle this effectively. The 800-pound gorilla in the room is, of course, that the OS already does it (and the programmer gets the strawman out: well, mine will be more efficient because it's threaded). When your only tool is a hammer, every problem looks like a nail.

  • Anonymous (unregistered) in reply to highphilosopher
    highphilosopher:
    My Name?:
    F:
    f!
    class first <snipped some code>

    Why did you write this? I really want to know why. Of all the code snippets that get dropped on this site, what stimulus can you blame this response on?

    Honestly this worries me a lot. I think TRWTF that happens to me on a daily basis is people who overcode things. It seems to be a rampant problem. I'm not sure if it's the programmer version of Dick measuring or not, but either way it's #$%@#$% annoying.

    It made perfect sense to me. I'm not saying it was hugely funny or clever, but I see exactly what he was doing (it's a glorified "frist" joke, FYI). What exactly don't you understand? And who the hell is Dick? You're not one of those freaks that likes to name his junk, are you? You should have called it "lowphilosopher", that would have been so much better than "Dick".
  • Patrick (unregistered)

    Sorry, I'm still reading the article. I'll post a comment later. *returns to Digg*

    captcha: minim[al amount of work]

  • (cs) in reply to Hugh Brown
    Hugh Brown:
    So this code just returns the length of the file? You don't have to read any bytes of the file to know the file length -- no allocations/reallocations or file-reading necessary.
    Bellinghman:
    I love the way that pBuffer is declared as an array of pointers, but actually used as an array of BYTE.
    pBuffer is a pointer to a pointer - standard C/C++ technique for returning extra stuff (particularly that created with new or malloc) from a function. BYTE ** doesn't necessarily imply that the first pointer points to an array. Usage would be something like
    BYTE * fileData;
    int length = ReadBinaryFile("file.txt", &fileData);
    
    At which point fileData points to the contents of the file.
  • Plz Send Me The Code (unregistered) in reply to ThePants999

    You guys don't get all obsessive compulsive over this sort of thing? What sort of programmers are you?

  • gus (unregistered)

    Some more WTFs in there:

    (1) It does not check for success on the read.

    (2) It assumes that the file will fit in RAM. If it requires virtual memory, that's another couple of decades of slowdown.

    (3) It assumes the allocations are all going to work.

    (4) It reads 255 bytes at a time, which is not a block size on any kind of disk. 255 out of 256 reads are going to be misaligned and will require the kernel to copy buffers.

  • tim (unregistered)

    I would have just increased the size of the buffer from 256 bytes to 10MB (or whatever, depending on the typical size of the files you're reading).

    99% of the speedup with 1% of the effort. No need to unit test, as you haven't changed the logic.
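
    (A hypothetical sketch of that change - the original function isn't shown here, so this just illustrates reading in 10MB chunks into a growing buffer:)

    #include <cstdio>
    #include <vector>

    std::vector<unsigned char> ReadWholeFile(const char* path)
    {
        std::vector<unsigned char> data;
        std::vector<unsigned char> chunk(10 * 1024 * 1024); // the bigger buffer
        FILE* f = std::fopen(path, "rb");
        if (!f) return data;
        size_t n;
        while ((n = std::fread(chunk.data(), 1, chunk.size(), f)) > 0)
            data.insert(data.end(), chunk.begin(), chunk.begin() + n);
        std::fclose(f);
        return data;
    }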

  • Outtascope (unregistered) in reply to F
    F:
    what's the non-programmer version of "Dick measuring"?

    Posting overly complicated VB source. And no one likes an improper Dick.

  • (cs)

    Barring memory-constrained environments, reading the contents of a file into a 256-byte buffer (especially when the job is to scan multiple, potentially large files) is the giveaway to something: that something can be qualified as either sophomoric genius or inexcusable professional wtflocaust.
