• Zunesize Me! (unregistered) in reply to josefx
    josefx:
    AFAIK there are two reasons to believe this.
    1. He mixed up socket and file handling; at least Windows "locks" sockets for some time after they are closed to avoid stray packets.

    2. They use some broken maleware scanner which would check the files after they were changed (blocking access while doing so), leading to an error if the program tried to access the file again at the same time.

    MMMMmmmmmm... maleware! How does a twink like me find some of that?!

    captcha: genitus - OOOOohhhhhhhh... Yeeeeeaaaaaaahhhhhh...

  • PiisAWheeL (cs) in reply to McKay
    McKay:
    Nagesh:
    Department of More Money:
    At a previous job we had a dial setting on the framework that we could set from 0 to 1000. As a client got further and further behind on their service bills, we received change requests to the effect of "Increase IncompeTech's disservice factor by 10 points." The disservice factor represented how many milliseconds we waited after significant portions of code (database query, input processing, page rendering) before moving on to the next step. This might not have been bad for some applications, but on some pages you were waiting upwards of 20 seconds of dissatisfaction time to get your page.

    Needless to say, no client ever hit the magic 1000. (We usually discontinued service before that point.)

    How dare you admit to write code that is being immoral? http://www.gammadyne.com/ethics.htm

    You're next job will be here. { image }

    No, it's not a violation of a code of ethics to decrease performance for non-paying customers. They're not trying to be awesome by improving performance later; they're punishing customers who haven't been paying the bill. They would be fully ethical in turning off the service altogether. Just making it slow is being nice.

    I think a better solution would be to give threads for paying customers' sites higher priority, and then if there is any leftover CPU, dump it on the non-paying customers... if you are feeling generous enough to leave their service on.

  • Boris (unregistered)

    We van hardly stand the wait, please Christmas don't be late...

    Wait, what was the article about again?

  • Boris (unregistered) in reply to Boris
    Boris:
    We *can* hardly stand the wait, please Christmas don't be late...

    Wait, what was the article about again?

    Sorry, typo

  • Paul W. Homer (unregistered)

    The sleeps might actually be important. On Perl ports to NTFS (ActiveState, for instance), in some small but not all that rare cases, calls to file IO return before the operation is fully completed (this actually occurs in Java as well). So if you write a Perl function to recursively delete a very (very) large directory structure with lots of sub-directories and files, the code will fail pretty reliably (if it doesn't, just use more large files and deeper directories). Of course, this doesn't happen on Unix systems :-)

    Paul.

  • J-L (unregistered)

    Like Alvin in the story, I knew a guy who would put

    sleep(1)
    after
    unlink()
    calls in C++ and Perl code, because he didn't want the next line of code to execute until the file being unlinked was guaranteed to be deleted.

    He even put this sleep(1) statement in tight loops, causing the programs to process 3,600 files in about an hour (when, without it, the programs would process the same amount in about a minute).
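    For what it's worth, the sleep buys nothing here: Perl's unlink reports success or failure immediately, and on POSIX systems the directory entry is gone by the time the call returns. A minimal sketch (the scratch-file name is hypothetical):

```perl
use strict;
use warnings;

my $file = "/tmp/unlink_demo.$$";    # hypothetical scratch file
open my $fh, '>', $file or die "open $file: $!";
close $fh or die "close $file: $!";

# unlink returns the number of files removed; when it returns,
# the directory entry is already gone -- nothing to wait for.
unlink $file or die "unlink $file: $!";
print -e $file ? "still there\n" : "gone\n";    # prints "gone"
```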

    Ironically, this was the same person who would change lines of code like

    n++;
    to
    ++n;
    because it would make the programs run "blindingly fast" in tight loops. (A claim which, through benchmarking, I've never been able to prove.) It's as if that he never stopped to think that sleeping for one second in a tight loop would bog your program enough to make it look like it was unresponsive.

  • PiisAWheeL (cs) in reply to Boris
    Boris:
    Boris:
    We *can* hardly stand the wait, please Christmas don't be late...

    Wait, what was the article about again?

    Sorry, typo

    I know right... the keys are like, right next to each other.

  • J-L (unregistered)

    About the phrase "The only right way to write a Perl program is whatever way works":

    As a programmer who's been using Perl for almost ten years now, I have to say that I see Perl as two different languages: One with "use strict;" and "use warnings;" and one without.

    And I can tell you, Perl programming without "use strict;" and "use warnings;" is terrible... the reduced error checking makes bugs abound. But programming with "use strict;" and "use warnings;" is a pleasure, in that once you fix your typos (that you would have to anyway with a good compiler) you encounter virtually no unpleasant surprises.

    From debugging and maintaining other people's Perl code, it feels like code that uses "use strict;" and "use warnings;" is at least ten times better (quality-wise) than code that doesn't.

    So if you don't like Perl and you've never used "use strict;" and "use warnings;" please re-learn Perl with those pragmas. Your Perl code will only get better from there.
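    A two-line demonstration of the difference (hypothetical variable names): with the pragmas, a typo is a compile-time error instead of a silent undef.

```perl
use strict;
use warnings;

my $total = 42;
print "total: $tota1\n";   # typo: digit one, not letter ell
# With "use strict" this aborts at compile time:
#   Global symbol "$tota1" requires explicit package name
# Without it, Perl quietly interpolates undef and prints "total: ".
```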

  • Nagesh (cs) in reply to pjt33
    pjt33:
    What's a fire handle?

    A place where fire brigade work

  • DCRoss (cs)

    I'm sure I have read this article before, only in reverse.

    http://thedailywtf.com/Articles/The-Speedup-Loop.aspx

    Sounds like Alvin was having a good week and didn't want to throw away his insurance.

  • Maurits (cs)

    Of all languages, Perl provides the most direct window into the programmer's way of thinking.

    Thus, it has a good reputation for being very easy to write.

    It also has a reputation for being very difficult to read. This is not Perl's fault.

  • Miroslav (unregistered) in reply to Earl Colby Pottinger

    Well, a hard drive is a physical device. Those 16 MB will not just magically appear on the drive; the read/write head must have time to record every single byte. The HD's memory cache will help, but if it's smaller than your total file size(s)...

    I'm actually somewhat surprised that you don't need longer than 1 ms between writes. I guess HDs are a lot faster nowadays.

  • that guy (unregistered) in reply to J-L
    Ironically, this was the same person who would change lines of code like n++; to ++n; because it would make the programs run "blindingly fast" in tight loops. (A claim which, through benchmarking, I've never been able to prove.)

    There's a tiny nugget of truth here, but not a lot. With a C++ object using operator overloading (such as an STL iterator), it is slightly faster to use preincrement where possible, because postincrement requires the object to be copied so that the value-before-increment can be returned. Realistically, a good optimizing compiler can often optimize the difference away if the result is not actually used; further, objects that can sensibly be postincremented tend to be small, so a single copy is usually not a prohibitive cost. But newbies to STL are often taught to prefer preincrement to postincrement, and occasionally they are led to believe that there is a massive performance gain involved.

  • Dave (unregistered) in reply to SomeCoder
    SomeCoder:
    Nagesh:
    Read #6.

    Higher quality is reason India is ading job and US is losing them.

    Hahahahahahaha!

    As someone who deals with outsourcing in India on a daily basis, this is hilarious; thanks for the laugh :) The reason India is adding jobs and the US is losing them isn't because of quality. It is happening because you pay literally pennies on the dollar for Indian engineers to crap out some code that is brittle, hard to use, and only sort of works.

    Indian engineers that know what they are doing come to America where they can make actual money. I've worked with lots of intelligent people like this. But I've never worked with any outsourced "talent" that could pass Programming 101. The old saying "you get what you pay for" is especially true in this case.

    LOL, another newcomer successfully trolled.

  • Jerry (unregistered) in reply to Department of More Money
    Department of More Money:
    At a previous job we had a dial setting on the framework that we could set from 0 to 1000. As a client got further and further behind on their service bills, we received change requests to the effect of "Increase IncompeTech's disservice factor by 10 points." The disservice factor represented how many milliseconds we waited after significant portions of code (database query, input processing, page rendering) before moving on to the next step. This might not have been bad for some applications, but on some pages you were waiting upwards of 20 seconds of dissatisfaction time to get your page.

    Needless to say, no client ever hit the magic 1000. (We usually discontinued service before that point.)

    I worked for a software company that was not quite that subtle. We would sell clients our monstrous code base along with a box to run it on. Then they had to pay monthly maintenance too. If they fell behind, a logic bomb would go off and from that point on anyone who tried to use any of the software features was immediately logged off.

    The company spent $BIGNUM developer hours making sure the bomb module was called from just about everywhere, so no matter what you tried to do, you'd get trapped. (The bomb code, and its calling sequence, were duly obfuscated since we gave the customers source code.) The accountants salivated at the now guaranteed perpetual revenue stream.

    The user experience was pretty blatant. You'd come in one morning and everybody was getting logged off. Since the client's entire company would be forced to a standstill, this typically got excellent levels of management attention. Although you might not quickly realize why it was happening, what was happening was abundantly clear.

    Now this wasn't a Windows box. Whenever anyone wanted to log out, they did so by running a particular command. So the fix was to rename the logoff command. Clients usually figured this out in about 3 minutes, at which point their system returned to full functionality and their management was royally pissed at the vendor, us. Pissed enough to ensure we never got another penny from that client.

    The accountants, previously salivating at the "guaranteed" perpetual revenue stream, found their bodily fluids were still leaking, but this time, said fluids were tears.

  • Mike (unregistered) in reply to J-L
    J-L:
    It's as if that he never stopped to think that sleeping for one second in a tight loop would bog your program enough to make it unresponsive.
    FTFY.

    Captcha: augue: It's haud to augue with an aumed robber.

  • keiranhalcyon31 (unregistered) in reply to Maurits
    Maurits:
    Of all languages, Perl provides the most direct window into the programmer's way of thinking.

    Thus, it has a good reputation for being very easy to write.

    It also has a reputation for being very difficult to read. This is not Perl's fault.

    That's why it's called a write-only language.

  • Nagesh (cs) in reply to Severity One
    Severity One:
    Looks like Alvin substituted his own lack of sleep with Perl code...

    Once you enter realm of programing, you rarely get to sleep.

  • snoofle (cs)

    RE: logic bombs...

    Progressive slowdown sleeps can work, but even when obfuscated, folks can usually figure out what they do with a debugger. If you're ever caught, you'll have one angry soon-to-be-former customer. It's not so much losing the non-paying customer as the bad press they can generate on the internet that comes back at you.

    A better sort of "bomb" might be along the lines of what AV software does. Simply pop up a dialog that says: Your support for <product> has lapsed on <date>. The <paid portion> functions of this program will/have cease/d to function on <end-of-grace-period-date>. Please contact billing at <phone number> to renew your service. And then have the program just return to the main menu. If you put that as the first thing in every function, they'll be annoyed, at themselves, for having gotten caught, and there's a pretty good chance they'll pay the bill.

    If they don't pay up, then they weren't going to renew anyway, so locking them out won't piss them off too much as they know it's their own fault.

  • Tom (unregistered) in reply to onitake

    Among other things, I have to create output forms in Español, so I use the ES layout or the US-International layout (depending on which VM I happen to be working in.)

    Drives people nuts when they try to use my computer.

  • Nickster (unregistered) in reply to Nagesh
    How dare you admit to write code that is being immoral?

    Darn tootin'. There's your lesson, kids: never ever admit anything.

  • J-L (unregistered) in reply to that guy
    that guy:
    Ironically, this was the same person who would change lines of code like n++; to ++n; because it would make the programs run "blindingly fast" in tight loops. (A claim which, through benchmarking, I've never been able to prove.)

    There's a tiny nugget of truth here, but not a lot. With a C++ object using operator overloading (such as an STL iterator), it is slightly faster to use preincrement where possible, because postincrement requires the object to be copied so that the value-before-increment can be returned. Realistically, a good optimizing compiler can often optimize the difference away if the result is not actually used; further, objects that can sensibly be postincremented tend to be small, so a single copy is usually not a prohibitive cost. But newbies to STL are often taught to prefer preincrement to postincrement, and occasionally they are led to believe that there is a massive performance gain involved.

    I'll concede that there might be a tiny nugget of truth there, but only when the '++' operator is overloaded.

    But with C and C++'s plain-old-data-types (int, char, short, etc.) there is no performance difference, at least not that I've ever been able to uncover. People intuitively think there is a huge performance difference, but that's only because they would implement post-increment with a copy, if the native code were left up to them.

    But down at the low level, it is absolutely possible to implement post-increment and post-decrement without creating a copy, and compilers are aware of this.

  • Gurth (cs) in reply to pjt33
    pjt33:
    What's a fire handle?
    [image]
  • Born Texas Proud (unregistered) in reply to pjt33
    pjt33:
    What's a fire handle?
    It's what a Chinese programmer calls a file handle.
  • J-L (unregistered) in reply to Maurits
    Maurits:
    Of all languages, Perl provides the most direct window into the programmer's way of thinking.

    Thus, it has a good reputation for being very easy to write.

    It also has a reputation for being very difficult to read. This is not Perl's fault.

    Well said, Maurits!

    I've seen it where people care about writing good Perl code, and where people don't. Guess which Perl code was better written...

  • np (unregistered)

    If they are worried about flushing of the file, they can set $|=1; and it will auto-flush. I'm sure the performance impact is better than sleeping 5 or 6 seconds.

  • LK (unregistered) in reply to Rootbeer
    Rootbeer:
    bcs:
    s/sleep([0-9]*)/sync()/ ??

    You need to escape your parentheses.

    Or maybe you did, and the comment system ate it.

    Well, he would need to escape them if he was using Perl to do the replacement in his Perl script. I would have used sed, and in BREs (Basic Regular Expressions) unescaped parentheses are not special; you must escape them (\( and \)) for them to be special, and that holds for POSIX and GNU BREs alike!

    Capcha: enim. Perl and Latin have things in common . . .

  • BR (unregistered) in reply to bcs
    bcs:
    s/sleep([0-9]*)/sync()/ ??

    You don't need the sync() either. Calling close() on an open filehandle closes that handle, flushes the I/O buffers and closes the system file descriptor. It'll return false if any of those don't happen. Though I've never seen anyone actually check the return value of close(), you could if you wanted to be sure everything was flushed and closed properly.
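    In code, the check BR describes is one extra clause (sketch with a hypothetical file name):

```perl
use strict;
use warnings;

my $path = "/tmp/close_demo.$$";    # hypothetical scratch file
open my $fh, '>', $path or die "open $path: $!";
print {$fh} "some data\n" or die "print: $!";

# close() flushes Perl's buffers and surfaces any deferred write
# error (e.g. a full disk), so checking its return value replaces
# both the sync() and the sleep().
close $fh or die "close $path: $!";
unlink $path;
```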

  • geoffrey, MCP, PMP (unregistered)

    Alvin, the senior developer, probably had a reason for going back and fixing Dave's work. Really mature on Dave's part -- instead of figuring out why it was in fact a stupid question he'd just asked, he goes running to this site with it.

  • Fred (unregistered) in reply to np
    np:
    If they are worried about flushing of the file, they can set $|=1; and it will auto-flush. I'm sure the performance impact is better than sleeping 5 or 6 seconds.
    That, ladies and gentlemen, is an example of intuitive Perl code. As in, you can take your intuitive systems and shove them up your /dev/null.
  • Ryo Chonan (unregistered) in reply to Born Texas Proud
    Born Texas Proud:
    pjt33:
    What's a fire handle?
    It's what a Chinese programmer calls a file handle.
    I think the stereotype you're going for there is Japanese programmers. Chinese people have no problem pronouncing l's and r's. Ever heard of lo mein? How about ribs?
  • Matt Westwood (cs) in reply to SomeCoder
    SomeCoder:
    Nagesh:
    Read #6.

    Higher quality is reason India is ading job and US is losing them.

    Hahahahahahaha!

    As someone who deals with outsourcing in India on a daily basis, this is hilarious; thanks for the laugh :) The reason India is adding jobs and the US is losing them isn't because of quality. It is happening because you pay literally pennies on the dollar for Indian engineers to crap out some code that is brittle, hard to use, and only sort of works.

    Indian engineers that know what they are doing come to America where they can make actual money. I've worked with lots of intelligent people like this. But I've never worked with any outsourced "talent" that could pass Programming 101. The old saying "you get what you pay for" is especially true in this case.

    I've been working with some outsourced talent which is fucking mustard mate, you must be a racist arsehole who doesn't know fucking shit.

  • BR (unregistered) in reply to Fred
    Fred:
    np:
    If they are worried about flushing of the file, they can set $|=1; and it will auto-flush. I'm sure the performance impact is better than sleeping 5 or 6 seconds.
    That, ladies and gentlemen, is an example of intuitive Perl code. As in, you can take your intuitive systems and shove them up your /dev/null.

    People get used to seeing stuff like '$|++;' up at the top of a script. There are shorthand built-in variables for a lot of things, and turning on autoflush is a pretty common one. Not as widely used as $! or $_ or whatever, but you see it every so often.

    If it's a problem, then just say in your style guidelines to use the 'English' pragma and then the long-form variables (in this case $OUTPUT_AUTOFLUSH rather than $|). All the short built-ins have longer, more descriptive names.

    It's like with anything: see it long enough and it's eventually no longer new and confusing. I haven't looked at the perlvar manpage for years and years. The number of built-ins that get used often I can count on one hand.

    Though I tend to ride my guys pretty hard, even on throw-aways. It takes care to write good, maintainable code, and it's a good habit to get into even with shorter scripts. I think that the shorter built-ins are generally fine, unless it's a seldom-used one. I'd ask them to use $INPUT_LINE_NUMBER rather than $., for example.
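    Side by side, the terse and the 'English' spellings (same effect):

```perl
use strict;
use warnings;

# Terse built-in: flush the currently selected output handle
# after every print.
$| = 1;

# Equivalent long form via the English pragma:
use English qw( -no_match_vars );
$OUTPUT_AUTOFLUSH = 1;

print "progress update\n";    # reaches the terminal/log immediately
```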

  • Matt Westwood (cs) in reply to J-L
    J-L:
    About the phrase "The only right way to write a Perl program is whatever way works":

    As a programmer who's been using Perl for almost ten years now, I have to say that I see Perl as two different languages: One with "use strict;" and "use warnings;" and one without.

    And I can tell you, Perl programming without "use strict;" and "use warnings;" is terrible... the reduced error checking makes bugs abound. But programming with "use strict;" and "use warnings;" is a pleasure, in that once you fix your typos (that you would have to anyway with a good compiler) you encounter virtually no unpleasant surprises.

    From debugging maintaining other people's Perl code, it feels like code that uses "use strict;" and "use warnings;" is at least ten times better (quality-wise) than code that doesn't.

    So if you don't like Perl and you've never used "use strict;" and "use warnings;" please re-learn Perl with those pragmas. Your Perl code will only get better from there.

    I had cause to perform some specialised munging of a file for a specific task recently. I'd heard about Perl, so I downloaded it, googled around a bit for learning resources and started work. Imagine my delight when after a total of about 10 hours' work I had performed said munging task and it worked.

    As I had taken time to study the online help pages, I had encountered the advice to use strict; and use warnings; and indeed, bugs were found at compile time (or whatever it called itself when I ran the bloody thing).

    The moral of this story is: Perl's great. It's easy to learn and write stuff in, for certain specialised tasks. This little program sits there in an Ant script, between a QF-Test script and a selenium script. Enter the appropriate Ant command and hey presto! instant regression test. Couldn't have achieved that with any kind of consistency without the delights of Perl.

  • Matt Westwood (cs) in reply to BR
    BR:
    Though I tend ride my guys pretty hard, even on throw-aways. It takes care to write good, maintainable code, and it's a good habit to get into even with shorter scripts. I think that the shorter built-ins are generally fine, unless it's a seldom used one. I'd ask them to use $INPUT_LINE_NUMBER rather than $. for example.

    Same with mine. There's no such thing as throwaway code. There's always someone going to come up to you with: have you still got the program to do yadayada ...? and because it's been written adequately, and commented well enough, tweaking it for the second case is straightforward to do, and to extend it into an official developer tool is no problem.

  • PiisAWheeL (cs) in reply to Tom
    Tom:
    Among other things, I have to create output forms in Español, so I use the ES layout or the US-International layout (depending on which VM I happen to be working in.)

    Drives people nuts when they try to use my computer.

    You should try dvorak. Nobody uses my computer for SHIT!

  • BR (unregistered) in reply to Matt Westwood
    Matt Westwood:
    As I had taken time to study the online help pages, I had encountered the advice to use strict; and use warnings; and indeed, bugs were found at compile time (or whatever it called itself when I ran the bloody thing).

    The moral of this story is: Perl's great. It's easy to learn and write stuff in, for certain specialised tasks. This little program sits there in an Ant script, between a QF-Test script and a selenium script. Enter the appropriate Ant command and hey presto! instant regression test. Couldn't have achieved that with any kind of consistency without the delights of Perl.

    I also find myself going through svn or old scratch folders looking for old code of some variety. (The guy who was here before me used the file system as his version control system, and his coding was as terrible as that scheme.) Bad perl is really, really bad. I threw a bunch of his away, actually. It would have been nice to re-use that code, but it took longer to get it into shape than to rewrite it. Good perl is harder to achieve than in many (most?) other languages, but definitely possible. You just have to get into good habits early on and stick with them.

    Most definitely use strict and warnings. Perl is weakly typed, so it's far too easy to try to add a number to a string (which, in fact, perl will happily do for you, and unless you ask for warnings it won't complain a bit). Also try 'use diagnostics;' sometime. You'll get very detailed error messages.
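    The string-arithmetic pitfall in two lines:

```perl
use strict;
use warnings;

my $n = "3 apples" + 2;   # warning: Argument "3 apples" isn't numeric
print "$n\n";             # prints 5: perl coerced the leading "3"
```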

    BTW, I'm going to work the phrase "fucking mustard mate" into conversation at least a couple times a week.

  • jverd (unregistered) in reply to geoffrey, MCP, PMP
    geoffrey:
    Alvin, the senior developer, probably had a reason for going back and fixing Dave's work. Really mature on Dave's part -- instead of figuring out why it was in fact a stupid question he'd just asked, he goes running to this site with it.

    Oh, come on, you can do better than that! Even Nagesh's trolls aren't that lame!

  • jverd (unregistered) in reply to Matt Westwood
    Matt Westwood:
    SomeCoder:
    Nagesh:
    Read #6.

    Higher quality is reason India is ading job and US is losing them.

    Hahahahahahaha!

    As someone who deals with outsourcing in India on a daily basis, this is hilarious; thanks for the laugh :) The reason India is adding jobs and the US is losing them isn't because of quality. It is happening because you pay literally pennies on the dollar for Indian engineers to crap out some code that is brittle, hard to use, and only sort of works.

    Indian engineers that know what they are doing come to America where they can make actual money. I've worked with lots of intelligent people like this. But I've never worked with any outsourced "talent" that could pass Programming 101. The old saying "you get what you pay for" is especially true in this case.

    I've been working with some outsourced talent which is fucking mustard mate, you must be a racist arsehole who doesn't know fucking shit.

    And you're a fucking idiot if you think that relating his negative business and technical experiences makes him racist.

    Actually, I'm pretty sure you're a fucking idiot anyway.

  • hewioe (unregistered) in reply to Zylon
    Zylon:
    pjt33:
    What's a fire handle?
    You can't expect Remy to waste time on proofreading. He's got easter eggs to bury, dammit!
    Surely it was deliberate to emphasise the Awesomeness of Alvin...
  • Ben Jammin (unregistered) in reply to jverd
    Matt Westwood:
    SomeCoder:
    Nagesh:
    Read #6.

    Higher quality is reason India is ading job and US is losing them.

    Hahahahahahaha!

    As someone who deals with outsourcing in India on a daily basis, this is hilarious; thanks for the laugh :) The reason India is adding jobs and the US is losing them isn't because of quality. It is happening because you pay literally pennies on the dollar for Indian engineers to crap out some code that is brittle, hard to use, and only sort of works.

    Indian engineers that know what they are doing come to America where they can make actual money. I've worked with lots of intelligent people like this. But I've never worked with any outsourced "talent" that could pass Programming 101. The old saying "you get what you pay for" is especially true in this case.

    I've been working with some outsourced talent which is fucking mustard...

    Hopefully, not on the clock

  • Tom (unregistered) in reply to Matt Westwood

    Join the club... I had to modify an internal tool recently, and I came up against a similar issue. I had to add a new service to the "Start/Stop Services" page, and when I got there, I found around 1000 lines of copied-and-pasted code, rather than what I would have expected: a single function called from the Click events of the 10 buttons on the page.

    Yes, the previous guy had copied and pasted 60 lines of code 8 times, modifying the service names as he went.

    Then he copied ALL OF THOSE into one giant "Start All" function and one giant "Stop All" function.

    o.O

  • Tom (unregistered) in reply to PiisAWheeL

    I have seriously considered that, but I'd have to do a lot of switching back and forth, since I also use computers that aren't exclusively mine... so in the end, it would be more frustration than it's worth.

  • Coyne (cs) in reply to J-L
    J-L:
    Ironically, this was the same person who would change lines of code like
    n++;
    to
    ++n;
    because it would make the programs run "blindingly fast" in tight loops.

    Maybe he just thinks it's better to be proactive rather than reactive? Never put off to the end what you can do at the beginning.

  • Ralph (unregistered) in reply to BR
    BR:
    The guy who was here before me used the file system as his version control system
    Oh, yeah, I know him. Although he had plenty of criticism for the lack of discipline among the development team, his own work was periodically "snapshotted" by tarring up his entire project and untarring it into another directory, named (badly) with the current date.

    When I was assigned to help him on a project that was going nowhere, he tarred up his entire hard disk (including the multiple snapshots, and the tar files containing those snapshots) and gave it to me to untar on my computer. After that, I was on my own. No documentation of status codes, interfaces, anything like that.

    I asked him what he wanted me to do when my changes were ready. "Oh, just email me the file and I'll decide whether it's worth keeping."

    After a couple months of this, and management pushing him to explain why the project was still going nowhere despite the help they gave him, he retired.

    And now, years later, the project is still nowhere. Turns out we didn't need it at all.

  • FuBar (unregistered) in reply to Tom

    Tom, you really have to learn to use the Quote button. No one knows what the heck you're talking about.

Leave a comment on “Let Me Sleep on It”
