• abigo (unregistered) in reply to Steve H
    Steve H:
    fw:
    the real WTF is that every line in today's article makes sense.

    Not if you live outside the US. Coffee club? J Crew? Dog the what?

    <3 u.

  • me (unregistered) in reply to Big G
    Big G:

    Dog the Bounty Hunter...Definitely worth looking up a photo if nothing else.

    It's worth looking up 'Vijay' in Google images as well. I can picture the guy now.

  • Ouch! (unregistered) in reply to BdH
    BdH:
    to which the VP replied, "that's pretty much the point".
    Wow, how did such a lucid person ever get promoted to VP? Anyway, do your best to keep him around as long as possible.
  • Jase (unregistered) in reply to cdosrun
    cdosrun:
    Big G:
    In case your Google isn't working: J Crew: Clothing for yuppies.

    I'm not sure "yuppies" will translate either. :-)

    Yuppies traditionally care more about appearance than function. Clothing for yuppies would, stereotypically, be expensive, poorly made, not last very long, and show the Logo of the designer prominently.

    I don't own any J Crew, nor am I familiar with the name, but that's the typical view of the "Yuppie" from my own culture.

    It's a bastardisation of an acronym (I can't exactly remember what for, but it's something like Young Upwardly-mobile Professional)

    Basically it refers to people who have more dollars than sense and like to show off what they consider to be their immense wealth

  • Blinkin (unregistered) in reply to Sounds Familiar
    Sounds Familiar:
    dkf:
    For mortals, it's usually better to use the proofs to establish completeness of coverage of the automatic test suite. (Without tests? You're not a software engineer, you're a cowboy code monkey.)
    I don't agree that automated tests establish completeness. Performing automated tests only demonstrates that a library or application performs as the test expects it to. It does not prove it actually works as intended.

    I suggest considering that the tests will typically have been written by the same individual that wrote the buggy software to begin with - or someone less capable (like the person in the team who has nothing more important to do and so got assigned to it).

    For that reason, like proofs, automated testing is useful, but should be held as having minimal value (only slightly above "well, the code doesn't generate any compiler warnings").

    What a peculiar thing to say. It sounds to me like: (a) the tests shouldn't be written by anyone who understands how the program works 'under the hood'; (b) the testing shouldn't be done by anyone who doesn't understand the intricacies of the program. Those two sound a little contradictory to me....

    That said, I'm a big fan of separate teams of testers who understand an application, but not necessarily how those applications work. They can then test expected functionality, with no bias for or against corner cases (other than obvious ones like value=0). I think the original author of the code has some responsibility to do fairly basic testing; however, I would imagine that for most programmers, personal pride means they've tested it enough to be absolutely certain that it works (and then get upset when the testers identify that it doesn't).

  • Blinkin (unregistered) in reply to Sounds Familiar
    Sounds Familiar:
    dkf:
    For mortals, it's usually better to use the proofs to establish completeness of coverage of the automatic test suite. (Without tests? You're not a software engineer, you're a cowboy code monkey.)
    I don't agree that automated tests establish completeness. Performing automated tests only demonstrates that a library or application performs as the test expects it to. It does not prove it actually works as intended.

    I suggest considering that the tests will typically have been written by the same individual that wrote the buggy software to begin with - or someone less capable (like the person in the team who has nothing more important to do and so got assigned to it).

    For that reason, like proofs, automated testing is useful, but should be held as having minimal value (only slightly above "well, the code doesn't generate any compiler warnings").

    Oh, and I missed a bit of your point (sorry)

    I think Automated testing can (when used properly, and made by competent testers) be a very valuable regression testing and/or load testing tool.

  • Shinobu (unregistered)

    A proof is essentially a construct that turns assumptions into a conclusion using reasoning steps. If the conclusion is proven wrong by reality, that means one of two things:

    1. A reasoning step was wrong. I don't know how many proofs (including my own) I've sent back to the drawing board by asking the why question. Or sometimes simply: ‘Wait a minute...’ People make reasoning errors.
    2. One of the assumptions was wrong. And this is why I think they should have studied the proof longer. They agreed to the proof, so I assume they checked the reasoning thoroughly, which means that one of the assumptions behind the proof about how the system operates is false. In other words, the real system isn't behaving like the system in their minds.

       Now, of course I can't tell from my comfy chair what's wrong. Some people say the simulator may be buggy, but I doubt that is causing the problem. After all, it displays the same faulty reading as the real system in the same conditions.

       All I can say is that there is potentially a lot to be gained from studying faulty proofs. For example, an acquaintance of a friend of mine studies clinical trials of homoeopathy and that can teach us a lot about faults in trials and human reasoning about statistics. So don't trash proofs. They are a bit like unit tests of the human mind and you disregard them at your peril.
  • Mike (unregistered)

    I'm curious, is it possible to prove things like finishing within a certain execution time or working within the numerical limits of the hardware?

  • db (unregistered) in reply to fourchan
    fourchan:
    I think this is the third steel mill article. I wonder if it's a product of anonymization or if steel mills are actually such WTFy places.

    Every steel mill I've been to has holes in the roof from some major WTF incident. That even includes one rod-rolling mini-mill that was less than six months old, with a roof as high as about a third of the length of the entire rolling line.

  • (cs) in reply to Mike
    Mike:
    I'm curious, is it possible to prove things like finishing within a certain execution time or working within the numerical limits of the hardware?

    Is assembly language on given equipment, sure.

    Addendum (2009-12-17 21:05): I mean "in", not "is".

  • (cs) in reply to Shinobu
    Shinobu:
    Now, of course I can't tell from my comfy chair what's wrong. Some people say the simulator may be buggy, but I doubt that is causing the problem. After all, it displays the same faulty reading as the real system in the same conditions.
    We don't actually know this either; it may be (I hope it is) the case that the bug was detected no later than during simulation and hadn't yet escaped into the real world.

    It is of course also possible that it was first spotted in the wild and only picked up during simulation when the latter was put through the specific scenario that triggered it - we don't know how comprehensively the simulation was put through its paces - but that would make for two bugs: the real-world one and the lacuna in the testing regime.

  • (cs) in reply to Mike
    Mike:
    I'm curious, is it possible to prove things like finishing within a certain execution time or working within the numerical limits of the hardware?
    The first one is easy enough: run the program (you are talking about a program, right?) for the requisite length of time and see if it finishes.
  • (cs) in reply to Gumpy Gus
    Gumpy Gus:
    So he proved the rest of the program, the compiler, the run-time libraries, all the device drivers, and the OS were correct?

    Wow, that's quite a guy.

    And, oh, did he prove that his proof was correct? And did he prove that his proof check was correct? ....

    Yep. Turned out to be a hardware problem.

    Actually, I'd be very amused if it were that, or user error, or his patch never actually got applied, etc.

  • A Gould (unregistered) in reply to Maurits
    Maurits:
    Story fail.

    What was wrong with the patch?

    What was wrong with the proof?

    • The problem with the patch is that it didn't fix the problem.

    • The problem with the proof is probably related to the fact that he "solved" their software the same afternoon that he started with the company. (He was shown around the office in the morning, handed a bug to fix, and presented his proof that same afternoon.) That's not really enough time to develop a formal proof of a piece of software.

  • (cs) in reply to hoodaticus
    hoodaticus:
    Mike:
    I'm curious, is it possible to prove things like finishing within a certain execution time or working within the numerical limits of the hardware?

    In assembly language on given equipment, sure.

    Addendum (2009-12-17 21:05): I mean "in", not "is".

    Here is a function of one argument, in assembly. Can you prove that it halts in finite time for any positive input less than 1 billion without trying them all?

    000000000040058c <foo>:
      40058c:       55                      push   %rbp
      40058d:       48 89 e5                mov    %rsp,%rbp
      400590:       48 83 ec 30             sub    $0x30,%rsp
      400594:       48 89 7d f8             mov    %rdi,-0x8(%rbp)
      400598:       48 89 75 f0             mov    %rsi,-0x10(%rbp)
      40059c:       48 83 7d f8 02          cmpq   $0x2,-0x8(%rbp)
      4005a1:       74 56                   je     4005f9 <foo+0x6d>
      4005a3:       48 8b 45 f0             mov    -0x10(%rbp),%rax
      4005a7:       48 83 c0 01             add    $0x1,%rax
      4005ab:       48 89 45 e0             mov    %rax,-0x20(%rbp)
      4005af:       48 8b 45 f8             mov    -0x8(%rbp),%rax
      4005b3:       83 e0 01                and    $0x1,%eax
      4005b6:       84 c0                   test   %al,%al
      4005b8:       74 18                   je     4005d2 <foo+0x46>
      4005ba:       48 8b 45 f8             mov    -0x8(%rbp),%rax
      4005be:       48 89 c2                mov    %rax,%rdx
      4005c1:       48 01 d2                add    %rdx,%rdx
      4005c4:       48 8d 04 02             lea    (%rdx,%rax,1),%rax
      4005c8:       48 83 c0 01             add    $0x1,%rax
      4005cc:       48 89 45 e8             mov    %rax,-0x18(%rbp)
      4005d0:       eb 15                   jmp    4005e7 <foo+0x5b>
      4005d2:       48 8b 55 f8             mov    -0x8(%rbp),%rdx
      4005d6:       48 89 d0                mov    %rdx,%rax
      4005d9:       48 c1 e8 3f             shr    $0x3f,%rax
      4005dd:       48 01 d0                add    %rdx,%rax
      4005e0:       48 d1 f8                sar    %rax
      4005e3:       48 89 45 e8             mov    %rax,-0x18(%rbp)
      4005e7:       48 8b 75 e0             mov    -0x20(%rbp),%rsi
      4005eb:       48 8b 7d e8             mov    -0x18(%rbp),%rdi
      4005ef:       e8 98 ff ff ff          callq  40058c <foo>
      4005f4:       89 45 dc                mov    %eax,-0x24(%rbp)
      4005f7:       eb 07                   jmp    400600 <foo+0x74>
      4005f9:       48 8b 45 f0             mov    -0x10(%rbp),%rax
      4005fd:       89 45 dc                mov    %eax,-0x24(%rbp)
      400600:       8b 45 dc                mov    -0x24(%rbp),%eax
      400603:       c9                      leaveq
      400604:       c3                      retq
    
  • hol (unregistered)

    Didn't I hear this steel mill thing before? I only got past the introductory paragraph, but hey, I grew up on slashdot.

  • (cs) in reply to Rodnas
    Rodnas:
    Well, it proves that reality has got it wrong again.

    Vijay must have been an HGTTG disciple.

    "...though it cannot hope to be useful or informative on all matters, it does make the reassuring claim that where it is inaccurate, it is at least definitively inaccurate. In cases of major discrepancy it was always reality that's got it wrong."

    --Douglas Adams (RIP)

  • Planck (unregistered) in reply to arty
      400594:       48 89 7d f8             mov    %rdi,-0x8(%rbp)
      400598:       48 89 75 f0             mov    %rsi,-0x10(%rbp)

    That's not a function of one argument, it's a function of two arguments.
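
    In source form, the routine appears to be (a minimal sketch, assuming the listing was compiled without optimization from a plain recursive function, and transcribing it into Haskell with invented names) a Collatz-style step counter whose base case is n == 2:

      -- Hypothetical reconstruction of the disassembled foo above.
      foo :: Int -> Int -> Int
      foo 2 count = count                          -- cmpq $0x2 / je: base case returns the counter
      foo n count
        | odd n     = foo (3 * n + 1) (count + 1)  -- odd branch: the lea/add sequence computes 3n + 1
        | otherwise = foo (n `div` 2) (count + 1)  -- even branch: the shr/sar sequence divides by 2

    Whether that recursion terminates for every positive starting value is the Collatz problem, for which no general proof is known, which is presumably the point of the challenge.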

  • (cs) in reply to iToad
    iToad:
    For some people, when their model of reality disagrees with reality, then reality is wrong.

    Such is the way of Republicans.

  • Robert Kosten (unregistered) in reply to Jase
    Jase:
    cdosrun:
    Big G:
    In case your Google isn't working: J Crew: Clothing for yuppies.

    I'm not sure "yuppies" will translate either. :-)

    Yuppies traditionally care more about appearance than function. Clothing for yuppies would, stereotypically, be expensive, poorly made, not last very long, and show the Logo of the designer prominently.

    I don't own any J Crew, nor am I familiar with the name, but that's the typical view of the "Yuppie" from my own culture.

    It's a bastardisation of an acronym (I can't exactly remember what for, but it's something like Young Upwardly-mobile Professional)

    Basically it refers to people who have more dollars than sense and like to show off what they consider to be their immense wealth

    I was under the impression it was "Young urban professional". I've always liked "dinks" (double income, no kids), too :-)

  • (cs) in reply to Blinkin
    Blinkin:
    Sounds Familiar:
    dkf:
    For mortals, it's usually better to use the proofs to establish completeness of coverage of the automatic test suite. (Without tests? You're not a software engineer, you're a cowboy code monkey.)
    I don't agree that automated tests establish completeness. Performing automated tests only demonstrates that a library or application performs as the test expects it to. It does not prove it actually works as intended.

    I suggest considering that the tests will typically have been written by the same individual that wrote the buggy software to begin with - or someone less capable (like the person in the team who has nothing more important to do and so got assigned to it).

    For that reason, like proofs, automated testing is useful, but should be held as having minimal value (only slightly above "well, the code doesn't generate any compiler warnings").

    Oh, and I missed a bit of your point (sorry)

    I think Automated testing can (when used properly, and made by competent testers) be a very valuable regression testing and/or load testing tool.

    All these people who consider that unit testing is of limited use are probably the same ones who have never actually worked on a system with a full and comprehensive modular testing strategy. Don't worry: it's their loss, and all the less competition in the universe for quality software.

  • Herby (unregistered) in reply to DaveyDaveDave
    DaveyDaveDave:

    Heh - I have a network admin friend who was employed at the same (I hope there isn't more than one) steel mill, to replace one of the guys who went to prison for criminal negligence, or some such charge.

    He had an anecdote about how his predecessor had bought a large batch of network cards from a rather questionable source. After some time, when several of them had been used to replace faulty cards at various disparate locations around the network, it was discovered that they all had the same MAC address. Hilarity ensued.

    Ah... Same MAC addresses. I know a bit about this. I knew a company that produced serialized PROMs that had MAC addresses for a network card vendor. The network card vendor contracted out the assembly, and specified that the PROM be single-sourced (from my friend). The contract assembly house, in an attempt to cheapen the build, decided on their own to make the part themselves, and having a "sample" they just peeled up the label, saw a PROM, and copied the data. All of the network cards would EASILY pass unit test, but out in the wild they would fail miserably.

    The network card vendor came over to my friend's PROM programming facility with one of the assemblies, yelling up a storm, and my friend noted that "that isn't my label on the PROM, they look like this" (producing a proper one). The guy's next action was definitely not a WTF moment: he picked up the phone and issued an immediate STOP WORK / you're-fired notice to the contract assembly house. My friend said it was a sight to see. The guy was OBVIOUSLY mad, but not at him (thankfully for my friend!).

  • Curt Sampson (unregistered) in reply to NightDweller

    "From now on i am going to remove all my unit tests and replace them with comments detailing the proof of how the system is sure to work correctly!"

    Well, it doesn't work if you just do it in the comments, of course, but if you've got a good statically typed language with a reasonably powerful type system (such as Haskell, OCaml or ML) you can construct reasonable proofs that certain types of errors can't occur, and then the compiler checks that proof and refuses to compile the program if it can't verify it.

    Moving from Ruby to Haskell so I could do this reduced my testing burden considerably. I'm very pleased.

    It is, of course, important to understand the limits of this technique. My compiler doesn't have a verified proof, so it might validate things that are not valid. But testing has similar limitations.
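
    To make that concrete, here is a minimal sketch (invented for illustration, not taken from any real codebase) of the kind of error a type system can rule out: if a non-empty list can only be constructed with at least one element, then "head of an empty list" is not a failure you need a test for.

      -- A list type that cannot be empty: the constructor is the proof.
      data NonEmpty a = a :| [a]

      safeHead :: NonEmpty a -> a
      safeHead (x :| _) = x
      -- There is no empty case to handle, so the compiler has effectively
      -- checked that safeHead is total; no unit test for the empty list is needed.

    The same trick scales up to invariants like "this input has already been validated" or "this string has already been escaped".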

  • amet (unregistered) in reply to BdH
    BdH:
    Occasionally, executives do get it right.

    Really? So, if someone is incompetent enough to not be able to do his/her job, you don't fire him/her? And instead just make that person someone else's problem?

  • Mads H. (unregistered)

    Damn them clever u-na-versety types, always provin' stuff that don't needs provin', just more elbow grease. Good thing Robert put that fella' in his place.

  • titter.com (unregistered) in reply to katastrofa
    katastrofa:
    Gumpy Gus:
    So he proved the rest of the program, the compiler, the run-time libraries, all the device drivers, and the OS were correct?

    Wow, that's quite a guy.

    And, oh, did he prove that his proof was correct? And did he prove that his proof check was correct? ....

    How do you test that the tests are correct?

    How do you prove that your perception of reality ("tests don't work", "code does not compile") is correct?

    How do you prove you're not just a brain in a jar talking to electrodes?

    You obviously don't understand how science works, but seeing as your nickname means "catastrophe", that's no wonder.

    But just so that you can't say I didn't answer your question: The tests are correct if the conditions you're testing for are covered by them. My perception of reality is correct if it agrees with the measurements done by other people, ideally if a causality can be established. Of course, other people could also be delusional. The point is that, if everyone is delusional in the same way, then the delusion is reality, since reality is what we perceive. Of course, if that is the case, you don't call that delusion; delusion is the domain of minorities.

    I can't prove that I am not a brain in a jar talking to electrodes. That's why I don't try to. That's how science works: it doesn't talk about things that are not measurable.

  • (cs) in reply to Curt Sampson
    Curt Sampson:
    "From now on i am going to remove all my unit tests and replace them with comments detailing the proof of how the system is sure to work correctly!"

    Well, it doesn't work if you just do it in the comments, of course, but if you've got a good statically typed language with a reasonably powerful type system (such as Haskell, OCaml or ML) you can construct reasonable proofs that certain types of errors can't occur, and then the compiler checks that proof and refuses to compile the program if it can't verify it.

    Moving from Ruby to Haskell so I could do this reduced my testing burden considerably. I'm very pleased.

    It is, of course, important to understand the limits of this technique. My compiler doesn't have a verified proof, so it might validate things that are not valid. But testing has similar limitations.

    You can do something similar in C# using code contracts: http://msdn.microsoft.com/en-us/devlabs/dd491992.aspx

  • PinkFloyd43 (unregistered)

    You should have put it in production, after Vijay sent an email to everyone indicating that he had fixed it and that anyone with questions/issues should get with him! It's the only way to chop off the head of Vijay!

  • (cs)

    There is a psychological condition called 'cognitive dissonance' which is clearly at work here. Unfortunately, it is often experienced by programmers. Even more unfortunately, the smarter the person in question is (often self-implicitly 'proved/n' by the level of (completed) education), the more likely it is that the reasoning will become something like: "I'm smart, so even if I do something really stupid, I can't get it wrong"...

  • AC (unregistered) in reply to Jase
    Jase:
    cdosrun:
    Big G:
    In case your Google isn't working: J Crew: Clothing for yuppies.

    I'm not sure "yuppies" will translate either. :-)

    Yuppies traditionally care more about appearance than function. Clothing for yuppies would, stereotypically, be expensive, poorly made, not last very long, and show the Logo of the designer prominently.

    I don't own any J Crew, nor am I familiar with the name, but that's the typical view of the "Yuppie" from my own culture.

    It's a bastardisation of an acronym (I can't exactly remember what for, but it's something like Young Upwardly-mobile Professional)

    Basically it refers to people who have more dollars than sense and like to show off what they consider to be their immense wealth

    Oh, so a douchebag/guido?

  • An Onimizer (unregistered) in reply to justsomedude
    justsomedude:
    In theory there is no difference between practice and theory, but in practice there sure as hell is!

    Oh dear. I'm so totally fed up with this little gem of wisdom.

    When someone quotes it to me after a WTF, it usually turns out that

    • cowboy mode was applied instead of any theory at all

    • the wrong theory was applied

    • the correct theory was applied incorrectly, e.g. due to insufficient understanding of the theory

    I have yet to meet the case where the theory was insufficiently advanced to solve the problem - in which case you should work on the theory before you try a solution.

    The above adage is just a poor attempt at being witty after having botched it, usually by people with an attitude of "Theory? We're doing real work here!" As if computers in a university and computers in an office were somehow different. The first people to make a magnetic drum memory, for example, weren't cowboys but professors, and without them the fabled garage companies wouldn't have happened, period.

  • frits (unregistered) in reply to An Onimizer
    An Onimizer:
    justsomedude:
    In theory there is no difference between practice and theory, but in practice there sure as hell is!

    Oh dear. I'm so totally fed up with this little gem of wisdom.

    When someone quotes it to me after a WTF, it usually turns out that

    • cowboy mode was applied instead of any theory at all

    • the wrong theory was applied

    • the correct theory was applied incorrectly, e.g. due to insufficient understanding of the theory

    I have yet to meet the case where the theory was insufficiently advanced to solve the problem - in which case you should work on the theory before you try a solution.

    The above adage is just a poor attempt at being witty after having botched it, usually by people with an attitude of "Theory? We're doing real work here!" As if computers in a university and computers in an office were somehow different. The first people to make a magnetic drum memory, for example, weren't cowboys but professors, and without them the fabled garage companies wouldn't have happened, period.

    You win the dum-dum prize. The inventor of magnetic drum memory was an engineer and businessman: http://en.wikipedia.org/wiki/Gustav_Tauschek

    Many inventions come from industry and government research, not professors. See "Bell Labs". If theory were sufficient, why have theoretical and experimental physicists? Surely there is no need to test all these wonderful theories. In fact, theory only comes after making careful observations, not before.

  • Organizer (unregistered) in reply to brazzy
    brazzy:
    Even when steel prices were highest, 150 tons of steel were worth MUCH less than "millions of dollars" (more like 20,000) - heck, even stuff that might be made from 150 tons of steel and includes a lot of energy and labor costs (e.g. a ship's engine) only has low 6 figure prices AFAIK.

    If your software bug destroys the plant, then you'll lose millions in production opportunities.

  • Organizer (unregistered) in reply to JiP
    JiP:
    There is a psychological condition called 'cognitive dissonance' which is clearly at work here. Unfortunately, it is often experienced by programmers. Even more unfortunately, the smarter the person in question is (often self-implicitly 'proved/n' by the level of (completed) education), the more likely it is that the reasoning will become something like: "I'm smart, so even if I do something really stupid, I can't get it wrong"...

    This is why Prolog still needs to be taught. It shows that bad premises lead to bad conclusions. Also, failed Prolog proofs can lead to infinite recursion, which smacks the point home!

  • laoreet (unregistered) in reply to Shinobu
    Shinobu:
    A proof is essentially a construct that turns assumptions into a conclusion using reasoning steps. If the conclusion is proven wrong by reality, that means one of two things: 1) A reasoning step was wrong. I don't know how many proofs (including my own) I've sent back to the drawing board by asking the why question. Or sometimes simply: ‘Wait a minute...’ People make reasoning errors. 2) One of the assumptions was wrong. And this is why I think they should have studied the proof longer. They agreed to the proof, so I assume they checked the reasoning thoroughly, which means that one of the assumptions behind the proof about how the system operates is false. In other words, the real system isn't behaving like the system in their minds. Now, of course I can't tell from my comfy chair what's wrong. Some people say the simulator may be buggy, but I doubt that is causing the problem. After all, it displays the same faulty reading as the real system in the same conditions. All I can say is that there is potentially a lot to be gained from studying faulty proofs. For example, an acquaintance of a friend of mine studies clinical trials of homoeopathy and that can teach us a lot about faults in trials and human reasoning about statistics. So don't trash proofs. They are a bit like unit tests of the human mind and you disregard them at your peril.
    The article never states that they agreed with his proof. Granted, the whole article's another bowytched job, so it's impossible to know whether they did or not, but the nearest we get in the article is a carriage return between the sentences
    While patting his notebook, he smiled and concluded, "therefore, this is my proof that the code will work — in writing!"
    and
    "Vijay,you work is quite impressive," Robert began, "but would you mind if we went over to the simulator and tried it out?"
    So, I would have to go with 2) Dog-food your own assumptions.
  • Monday (unregistered)

    It's much smarter to try and prove yourself wrong

  • microtherion (unregistered) in reply to frits
    frits:
    Many inventions come from industry and goverment research, not professors. See "Bell Labs".

    Ah yes, Bell Labs, home of Brian Kernighan (PhD, Princeton), Dennis Ritchie (PhD, Harvard), Ken Thompson (MS, UCB), Bjarne Stroustrup (PhD, Cambridge). Jes' plain folks using their gawd given common sense.

    [This is not intended to dispute frits' point, BTW, but the original article's gratuitous PhD bashing]

  • Shinobu (unregistered) in reply to laoreet

    I think we went our separate ways much earlier: when I read ‘Vijay left Robert's ... the actual fix.’ I pictured him explaining to Robert what he had done. Then Robert's reply seems to convey his agreement. But you're right, that may be a false assumption, and Robert's reply may have been simple sarcasm. In which case I would want to work there even less, for obvious reasons.

  • EmperorOfCanada (unregistered)

    The best comeback I ever heard to a guy working on his Masters in Comp. Sci. while complaining that the manager of the project had no degree at all was from the manager: "You needed someone to show you how to do this stuff?"

  • G (unregistered)

    There is no way in the known universe for 150 tons of iron, molten or otherwise, to cost "millions".

    Other than that, good story...

  • (cs) in reply to G
    G:
    There is no way in the known universe for 150 tons of iron, molten or otherwise, to cost "millions".

    Other than that, good story...

    That kind of depends on where it is and how it got there.

  • (cs)

    I am fairly confident that something like this *can* be proved, as long as every part of the system is deterministic. But I imagine the proof for something like this would be at least a few thousand pages long, unless the system is highly decoupled and you are allowed to assume perfect inputs into the component you are working on, in which case it may only be dozens or hundreds of pages.

    A fun exercise at school, but completely valueless in the real world.

    edit: jesus fucking christ, is there ANY logic as to whether the comment submission adds new lines to your post?? sometimes it does and sometimes it doesn't.

  • (cs) in reply to Blinkin
    What a peculiar thing to say. It sounds to me like: (a) the tests shouldn't be written by anyone who understands how the program works 'under the hood'; (b) the testing shouldn't be done by anyone who doesn't understand the intricacies of the program. Those two sound a little contradictory to me....

    As long as the code is reviewed (including the tests) -- I think you're okay having the same person write the tests. If the test does something weird like inline asm to poke values in a certain place in memory, then you raise a defect.

  • Henry Elsner (unregistered) in reply to Bim Job

    Did he get paid at least?

  • BBT (unregistered) in reply to anon

    Paper disproves Spock.

  • mauhiz (unregistered)

    tantrum fail

  • (cs)

    On one hand, I absolutely love how infallible mathematics is. On the other, I love how the Universe will occasionally say "Up yours!" and do something you never expected. I've always been torn between thinking Mathematically and thinking Scientifically. Of course in this case, I've had so much code blow up in my face I wouldn't dare put mission critical code into place without testing it first.

  • hoodaticus (unregistered) in reply to arty

    I don't need a whole function to satisfy the OP's request; all I need to do is show that any one assembly instruction can be proven to be processed in a given time. Take a hypothetical instruction that requires x clock cycles and divide x by the speed of the processor. Voila! Proof.

    There are CompSci classes that test on this.
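
    For example, with made-up numbers: a hypothetical instruction that costs 3 clock cycles on a 2 GHz processor is bounded by 3 cycles / 2,000,000,000 cycles per second = 1.5 nanoseconds, and (ignoring caches and pipelining) summing such bounds over a finite instruction sequence gives a worst-case execution time.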

  • jw (unregistered)

    I've met this guy. He was proud that using XP he never had to touch the mouse, at the cost of a little productivity of course, but style points count big in coding, right? Mouses are so 1989.
