• (cs)

    ahh yes, printf style debugging!

  • Fast Eddie (unregistered)

    You also didn't attach the cover sheet to your TPS report. I'll send you another copy of the memo explaining how to do that.

  • Firsty First (unregistered)

    WOW ... Can't even bring WTF words to type ...

  • (cs)

    So the opposite of debug is... bug!

  • (cs)

    Obviously debugging was a security risk because the development server was hooked up to the production DB.

  • Mitur Binesderti (unregistered)

    So let's see if we can translate this one:

    The "programmer" couldn't get contract work anymore because he sucks. Then when he finally lands a job he refuses to read the company's security policies and after a few weeks of incompetence they let him go.

    Now he works as a fish monger in Pokipsy where you can still find him telling this old tale to anyone who'll listen.

  • Paul W. Homer (unregistered)

    It's scary, isn't it, how quickly a few poorly thought-out overly rigid rules can lead to such a degeneration of development practices. The goal is reasonable (improved security), but the practice is ridiculous. Simple answers to complex problems only make the problems worse.

  • Tim (unregistered)

    Don't knock printf. It does the job, is extremely simple, and doesn't alter the running of the program (e.g. timing, window focus, etc.)

  • Someone (unregistered) in reply to Mitur Binesderti

    How, exactly, does displaying debug messages on a development server, not a production one, break security? No one will see the debug messages but the developers, and if your developers are a security risk you're screwed already.

  • Zapp Brannigan (unregistered)

    There are some industries, banking and insurance for example, that have reputations for peculiar development rules. As a consultant, Peter should have known that banks are the worst of the worst.

  • lul (unregistered) in reply to Someone

    that should be obvious

  • An embedded developer (unregistered) in reply to Tim

    Erm... I assume you've never tried using printf() in an embedded system. If you use it in something time-critical, say an interrupt handler, your timing will be shot to bits because of its complexity, not to mention that it often has problems with reentrancy. printf() is anything BUT extremely simple and it certainly can alter the running of a program.
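
For what it's worth, a common compromise in embedded land is to keep the ISR down to a couple of stores and defer the printf() to the main loop. A minimal sketch of the idea (the names, event codes, and buffer size are all invented for illustration):

```c
#include <stdint.h>
#include <stdio.h>

/* Single-producer (ISR) / single-consumer (main loop) ring buffer.
   Size is a power of two so the index wrap is a cheap mask. */
#define TRACE_SIZE 64
static volatile uint16_t trace_buf[TRACE_SIZE];
static volatile uint8_t trace_head, trace_tail;

/* Called from the interrupt handler: one store and one increment,
   no formatting, no reentrancy worries. */
static inline void trace_event(uint16_t code) {
    trace_buf[trace_head & (TRACE_SIZE - 1)] = code;
    trace_head++;
}

/* Called from the main loop, where printf()'s cost is harmless. */
static void trace_drain(void) {
    while (trace_tail != trace_head) {
        printf("event %u\n", (unsigned)trace_buf[trace_tail & (TRACE_SIZE - 1)]);
        trace_tail++;
    }
}
```

The ISR side stays down to a handful of instructions; the expensive formatting happens where timing doesn't matter.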

  • Risk Manager (unregistered)

    Process Risk: Debugging software.

    Nature of Risk: exposure of internal state and process information.

    Mitigation: Write only perfect software.

    ROI Support for Mitigation: Development staff is not paid to write buggy software. Buggy software represents extremely poor developer productivity. Mandating perfect software supports expected productivity levels.

  • Zapp Brannigan (unregistered) in reply to Someone
    Someone:
    How, exactly, does displaying debug messages on a *development* server, not a production one, break security? No one will see the debug messages but the developers, and if your developers are a security risk you're screwed already.
    It would be easy to miss some of the debug code before promoting to the production server. I think there was an example of something like that on this site a few days ago.
  • ClutchDude (unregistered) in reply to Risk Manager

    That....sounds very familiar. It's like a project kick-off statement or some other nebulous metric.

    I'm waiting for the day where I see something like that come across my desk(top).

  • Horrified Real-Time Systems Developer (unregistered) in reply to Tim
    Tim:
    Don't knock printf. It does the job, is extremely simple, and doesn't alter the running of the program (e.g. timing, window focus, etc.)

    Spoken like someone who has never developed for real-time systems. I learned the hard truth 25 years ago, when adding 5 assembler instructions (including a debugging snapshot dump) to a mainframe interrupt handler made the entire routine take longer than the undocumented 12 ms wall time system limit for interrupt processing. Interrupt processing threads demoted to batch priority don't work well, especially when they lose access to hardware control registers they absolutely need to service those pesky high-speed communication devices.

    Yeah, when you have the luxury of non-time-critical code and actual user-visible interaction, printf() and the like are a win. But there are limits.

  • Ken B (unregistered)

    "Debug? We don't need to debug. We get it right the first time!"

  • Zapp Brannigan (unregistered) in reply to An embedded developer
    An embedded developer:
    Erm... I assume you've never tried using printf() in an embedded system. If you use it in something time-critical, say an interrupt handler, your timing will be shot to bits because of its complexity, not to mention that it often has problems with reentrancy. printf() is anything BUT extremely simple and it certainly can alter the running of a program.
    In embedded land isn't toggling an output or blinking an LED the equivalent of printf()?
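
A sketch of that "one-instruction printf()" idea. On real hardware the port would be a memory-mapped GPIO register at some chip-specific address; here a plain variable stands in so the snippet compiles anywhere, and the pin assignment is purely hypothetical:

```c
#include <stdint.h>

/* Stand-in for a memory-mapped GPIO output register, e.g.
   #define DEBUG_PORT (*(volatile uint32_t *)0x40020014)
   -- that address is illustrative only. */
static volatile uint32_t DEBUG_PORT;
#define DEBUG_LED_PIN (1u << 5)   /* hypothetical pin assignment */

/* The embedded printf(): a single read-modify-write, watchable on a
   scope or on a soldered-on debug LED. */
static inline void debug_led_toggle(void) {
    DEBUG_PORT ^= DEBUG_LED_PIN;
}
```
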
  • Ken B (unregistered) in reply to Mitur Binesderti
    Mitur Binesderti:
    Now he works as a fish monger in Pokipsy where you can still find him telling this old tale to anyone who'll listen.
    ITYM:
    s/Pokipsy/Poughkeepsie/
  • masseyis (unregistered) in reply to Tim
    Tim:
    Don't knock printf. It does the job, is extremely simple, and doesn't alter the running of the program (e.g. timing, window focus, etc.)

    printf IS a potential security hole, because it's likely that at some point you'll printf some secure data to an insecure output. And it's easy to leave them in.

    Remote debugging a dev machine with fake data. Don't see the problem there...

  • (cs)

    Not a WTF. Ok, their security policies are stupid and get in the way of doing things, that's it. It's not like they are photocopying lines of code to be fed to the test server.

  • Dennis (unregistered)

    the last thing we'd want is an end user gleaming sensitive system information from a debug message!

    Ah, gleaming white and shiny new! The best kind of sensitive information!

    BTW, it should be "gleaning".

  • (cs) in reply to Ken B
    Ken B:
    Mitur Binesderti:
    Now he works as a fish monger in Pokipsy where you can still find him telling this old tale to anyone who'll listen.
    ITYM:
    s/Pokipsy/Poughkeepsie/
    You still picking your feet?
  • David Emery (unregistered)

    There are a lot of techniques that build on actually looking at and understanding the code, rather than writing main() {}; and then invoking a debugger. The most extreme of these is Cleanroom (http://en.wikipedia.org/wiki/Cleanroom_Software_Engineering).

    Personally, any time I had to invoke a debugger on my code, I took that as a personal failure: it meant I didn't know what my code was doing. But then, I'm used to working with languages that support substantial compile-time and even runtime checking.

    That being said, having a firewalled test and integration environment is certainly a best practice. So to me this is a combination of a "WTF" with respect to the corporate policies, and frankly in my mind a "WTF" with respect to how one develops software. Score me strongly on the side of getting it right before executing the code (and that often means a lot of compile cycles until the compiler and static checking tools & I together agree the code is correct.)

  • (cs) in reply to Mitur Binesderti
    Mitur Binesderti:
    So let's see if we can translate this one:

    The "programmer" couldn't get contract work anymore because he sucks. Then when he finally lands a job he refuses to read the company's security policies and after a few weeks of incompetence they let him go.

    Now he works as a fish monger in Pokipsy where you can still find him telling this old tale to anyone who'll listen.

    Nope, I don't think that's the translation. And don't use place names you can't spell. What's doubly-WTFy is that you could actually have looked it up since you're on the World Wide Web when you respond!

    See, there's this new thing called Google. And this other thing called Wikipedia. You should try them.

  • RandomUser223957 (unregistered)

    Perhaps they're talking about a different kind of "Trace" statement, but in my experience, there is a site-wide setting that disables output from debugging statements, and prevents the system from displaying traces when it returns 500 errors.

  • (cs) in reply to David Emery
    David Emery:
    Score me strongly on the side of getting it right before executing the code
    ...as opposed to those of us who deliberately write bugs into our code so we'll have something to fix?
  • Steve (unregistered)

    I inherited an ASP.NET application from another developer, as we didn't trust him to deploy anything onto our production server, and I wasn't going to deploy his stuff without going through it with a fine-tooth comb.

    So I load up the project on my machine. The first thing I do when starting on a new project is set it up on my development machine and step through the code -- see how it works.

    This application generated a bunch of errors when I tried to do this. I figured I must not have set something up properly on my machine. But I knew in the back of my mind what the problem really was -- the other developer "didn't believe in it."

  • Portsnap (unregistered)

    I assume the WTF is that they didn't invite him as a consultant. Right?

  • Joe Namath (unregistered)
    Peter worked as a highlyly paid IT consultant

    I don't care whether the Jets are strug-gah-ling...

  • Tephlon (unregistered) in reply to mrprogguy
    mrprogguy:
    Mitur Binesderti:
    *SNIP* Pokipsy *SNIP*

    SNIP And don't use place names you can't spell. What's doubly-WTFy is that you could actually have looked it up since you're on the World Wide Web when you respond!

    See, there's this new thing called Google. And this other thing called Wikipedia. You should try them.

    heh... Have you tried finding Poughkeepsie in Google or Wikipedia when all you know is that it sounds like "Pokipsy"?

    I do agree that he shouldn't have used a placename he doesn't know.

  • (cs)

    The average TDWTF reader cannot be expected to understand the subtle, yet deadly differences between "to" and "two".

  • AdT (unregistered) in reply to dpm
    dpm:
    The average TDWTF reader cannot be expected to understand the subtle, yet deadly differences between "to" and "two".

    Beet me too id.

  • Bob (unregistered) in reply to Tephlon

    Google search for "Pokipsy, NY", first result is the Wikipedia page on Poughkeepsie, New York. Not difficult.

  • (cs) in reply to An embedded developer

    I remember a time when printf() debugging exposed a compiler bug.

    I had some embedded code that was acting dead wrong, so I put in some printf() debugging and captured the output of the serial port. The code then worked!

    Eventually I narrowed it down to one printf() that made the code work - and it didn't even matter what you printed, it could be just printf("!");

    I then compared the assembler output listing from the compiler and discovered that the optimizer had screwed up! A value was calculated and written into a memory location, then reused in the next line of code; instead of re-reading the value from RAM, the register holding the result was used - except that it was the wrong register!

    Put a printf() between the two lines and, because this compiler passed the first few arguments in registers, it didn't attempt to optimize register usage across the call - and so the code worked. Arg!!!!
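
The usual defensive move against that class of miscompilation (and against genuinely shared memory) is the volatile qualifier, which obliges the compiler to emit a real load for every read and a real store for every write, rather than "remembering" the value in a possibly wrong register. A minimal illustration, not the original code:

```c
#include <stdint.h>

/* volatile: every access must go to memory; the compiler may not
   cache the value in a register across these two statements. */
static volatile uint32_t shared_result;

uint32_t compute_and_reuse(uint32_t a, uint32_t b) {
    shared_result = a * b + 1;   /* store must actually reach memory */
    return shared_result + b;    /* read must actually come from memory */
}
```
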

  • (cs)

    I've worked in a company where remote debugging was turned off on the shared development server. The reason was that the website was far too complicated to install locally. I would imagine that this system at the bank is similar. I'll bet they even have encrypted settings for the website stored in the registry.

    I'll bet that because they had a shared development server, they had no choice but to disable the remote debugger. The reason is that if a developer ran the debugger it would stop the server for all the other developers.

    For me, such an environment wouldn't be a problem because I'm an old school developer. But many of the new programmers fresh out of school can't live without it.

  • Chelloveck (unregistered) in reply to Zapp Brannigan
    Zapp Brannigan:
    In embedded land isn't toggling an output or blinking an LED the equivalent of printf()?

    Sure is, and an oscilloscope is a lovely debugging tool. The last real embedded project I was on, I managed to get the EEs to reserve an output and a couple of pads for nothing but a debug LED. The LED wasn't populated by the board house so it didn't add cost to the BOM, but I could solder one onto my test system. It was pure heaven, I tell you, to have an LED all to myself without having to re-purpose the power LED or something.

    Still takes a couple assembly instructions to toggle, though. As the poster above noted, that can be a killer in an ISR.

  • Joe Namath (unregistered) in reply to Horrified Real-Time Systems Developer
    Horrified Real-Time Systems Developer:
    Spoken like someone who has never developed for real-time systems. I learned the hard truth 25 years ago, when adding 5 assembler instructions (including a debugging snapshot dump) to a mainframe interrupt handler made the entire routine take longer than the undocumented 12 ms wall time system limit for interrupt processing. Interrupt processing threads demoted to batch priority don't work well, especially when they lose access to hardware control registers they absolutely need to service those pesky high-speed communication devices.

    Yeah, when you have the luxury of non-time-critical code and actual user-visible interaction, printf() and the like are a win. But there are limits.

    I've never developed a time-critical, real-time system like the one you've described... but wouldn't your objections hold equally true for using a debugger as well?

  • Murray (unregistered)

    Log4j is the way to go for trace/debug statements. It's configurable through a properties or XML file, so a forgotten debug or trace statement is near harmless as long as the production environment is configured properly. As for performance, the project publishes measurements of the cost of a trace statement that doesn't write anywhere. I have not worked with log4net but I am sure it's very similar to log4j.
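
The same "configure it off in production" idea in C is a build-time level guard, so that a leftover debug statement compiles away entirely in a production build. The macro and level names below are invented for illustration:

```c
#include <stdio.h>

/* Hypothetical log levels; pick the active one at build time, e.g.
   -DLOG_LEVEL=LOG_LEVEL_INFO for a production build. */
#define LOG_LEVEL_DEBUG 0
#define LOG_LEVEL_INFO  1

#ifndef LOG_LEVEL
#define LOG_LEVEL LOG_LEVEL_DEBUG   /* default: chatty dev build */
#endif

/* When the active level is above DEBUG, the condition is a constant
   false and the whole call is dead code the compiler removes. */
#define LOG_DEBUG(...) \
    do { if (LOG_LEVEL <= LOG_LEVEL_DEBUG) fprintf(stderr, __VA_ARGS__); } while (0)
```
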

  • kobe (unregistered) in reply to Tephlon
    Tephlon:
    mrprogguy:
    Mitur Binesderti:
    *SNIP* Pokipsy *SNIP*

    SNIP And don't use place names you can't spell. What's doubly-WTFy is that you could actually have looked it up since you're on the World Wide Web when you respond!

    See, there's this new thing called Google. And this other thing called Wikipedia. You should try them.

    heh... Have you tried finding Poughkeepsie in Google or Wikipedia when all you know is that it sounds like "Pokipsy"?

    I do agree that he shouldn't have used a placename he doesn't know.

    heh...You should try it Tephlon. Poughkeepsie is the first result returned if you google search "Pokipsy NY".

  • 2Pac (unregistered) in reply to dpm
    dpm:
    The average TDWTF reader cannot be expected to understand the subtle, yet deadly differences between "to" and "two".
    Two bullets to the head, too.
  • (cs)

    I was hoping it would turn out that they don't get paid for any time spent debugging code, or comments. Each line of code that is not in the final version or does not contribute to the program is $10 off his paycheck.

  • jay (unregistered) in reply to An embedded developer
    An embedded developer:
    Erm... I assume you've never tried using printf() in an embedded system. If you use it in something time-critical, say an interrupt handler, your timing will be shot to bits because of its complexity, not to mention that it often has problems with reentrancy. printf() is anything BUT extremely simple and it certainly can alter the running of a program.

    Erm ... He said he's writing reports for a banking system. I'm guessing that doesn't involve any time-critical embedded software. The fact that printf debugging is impractical for a rather specialized class of programming is hardly an argument against using it in cases where it is highly practical. If one of those advocating printf debugging really meant to say, "This is the only debugging technique any programmer could ever possibly need in any real or hypothetical programming project", I would certainly agree that that was absurd. But your argument here is a little like saying, "Some people live in houseboats. Therefore automobiles are a stupid and useless idea as a means of commuting, because if you tried to drive one to your houseboat, you'd sink to the bottom of the river."

    And, umm, the alternative discussed was remote debugging tools. Are you saying that stepping through code with a debugger does NOT cause problems in time-critical embedded code? How fast can you press that "step" button, anyway?

    Personally, I rarely use a debugging tool because I've found that 99% of the time, the time I spend single-stepping through code is just too tedious. I generally find it much easier to toss in a few printf's or the equivalent to display the relevant state at points where I think the problem might be. I only resort to a debugger when I am really lost. I've been at my present job 2 years and have used a debugger once.

  • jay (unregistered)

    Ohhhh, you want a debugger! Next thing you'll be wanting a compiler instead of hand-compiling your code like the rest of us. Aren't we getting awfully cushy here? Would you like the maid to bring you a cup of tea, sir?

  • Stig (unregistered)

    There is no debugging task so onerous that it cannot be solved by throwing more grads at it.

  • Dan (unregistered) in reply to GettinSadda
    GettinSadda:
    I remember a time when printf() debugging exposed a compiler bug.

    I had some embedded code that was acting dead wrong, so I put in some printf() debugging and captured the output of the serial port. The code then worked!

    Eventually I narrowed it down to one printf() that made the code work - and it didn't even matter what you printed, it could be just printf("!");

    I then compared the assembler output listing from the compiler and discovered that the optimizer had screwed up! A value was calculated and written into a memory location, then reused in the next line of code; instead of re-reading the value from RAM, the register holding the result was used - except that it was the wrong register!

    Put a printf() between the two lines and, because this compiler passed the first few arguments in registers, it didn't attempt to optimize register usage across the call - and so the code worked. Arg!!!!

    HP compiler? I've had that problem before too.

  • (cs) in reply to jay
    jay:
    Would you like the maid to bring you a cup of tea, sir?
    That sounds rather nice, actually. Is she Irish?
  • Stig (unregistered)

    "I've been at my present job 2 years and have used a debugger once"

    So you're the bugger that eschews the debugger, eh?

    When I get my hands on you....

  • (cs)

    I think this was anonymized from Credit Suisse, which uses Haskell as an internal language. Debugging Haskell is different, and can truly be best described as "looking at it" (the code). Assuming one is using the Turing complete type system to their advantage, if a program type checks but does something unexpected, one can narrow the problem down very quickly. Indeed, Haskell is used in high assurance environments specifically because of its easy verification.

  • Stig (unregistered)

    "Next thing you'll be wanting a compiler instead of hand-compiling your code like the rest of us"

    Hand-compiling code? CODE? You need a quasi-english language to spell out your program? Do you point out the words with your finger when you read?

    Write it in native binary like a man, chuck.

Leave a comment on “Debugging is Risky”
