Admin
Admin
Yuppie is used here in the UK too, but it means "young, up-and-coming". The expression kinda died in the '80s, though :p
Admin
I suggest considering that the tests will typically have been written by the same individual who wrote the buggy software in the first place, or by someone less capable (like the person on the team who has nothing more important to do and so got assigned to it).
For that reason, like proofs, automated testing is useful, but it should be held as having minimal value (only slightly above "well, the code doesn't generate any compiler warnings").
Admin
By "automated testing ... has ... minimal value" I assume you mean other than the immense value in keeping programmers focused on the task at hand.
Admin
Do you know the difference between theory and practice?
In theory, there is none.
In practice, there is.
Admin
Tests (when proven – there's that word again – to cover a suitable set of functionality) show correctness. Establishing that the test suite is adequate is really hard. The advantage of them over just looking at the code is that you can automatically perform the tests, rather than having to use lots of manual effort. This Is A Good Thing, especially in practice.
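To make the point concrete, here's a minimal sketch of what "automatically performing the tests" buys you (the `add` function and the test names are hypothetical, purely for illustration):

```python
def add(a, b):
    # Hypothetical function under test; stands in for real logic.
    return a + b

# Each test encodes one piece of expected behaviour.  The suite only
# "shows correctness" for the inputs it actually covers; establishing
# that the suite is adequate is, as noted, the hard part.
def test_add_positive():
    assert add(2, 3) == 5

def test_add_negative():
    assert add(-2, -3) == -5

if __name__ == "__main__":
    test_add_positive()
    test_add_negative()
    print("all tests passed")
```

The point is that rerunning this after every change costs nothing, whereas re-reviewing the code by hand costs lots of manual effort.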
Admin
Lol. Cowboy Code Monkeys unite!!!!
The 'anti-backup' league needs you! (They lost the production code)...
Mathematically speaking, though, a proof (if valid) is proof of something (lol)... Even Mr Albert's attempt to predict experimental values of General Relativity mathematically didn't correctly align with the experimental values detected... The mathematics behind it (yummy tensor analysis) was spot on, though. So much so that it's really beautiful how well it works when you get down to it (gotta love the assumed summation in tensor notation).
All that being said... I'm still trying to figure out how one would write up a mathematical proof for MsgBox usage... ;P
Admin
I've never met a Comp Sci PhD (or, for that matter, a 4.0 Comp Sci student) that I thought was a good developer
Admin
Which again is proven because you're "assuming" something? Awesome!
Admin
Admin
Hmmm... apparently I need a PhD to use this quoting functionality.
Admin
Am I the only one who worries if there is a bug in the simulator? (Yes, I know the simulator has been proved right...)
Admin
Sheesh... I thought for sure the story was going to end with the "Advanced Technologies" group somehow being wiped out by a vat of molten steel. Bummer...
Admin
Back we go to the original quote.
One throw-away quote, with no implication of "proof" attached, and you're getting steamed up for some reason? Not that I'd claim that Sherlock Holmes is exactly "culture." The original quote is, however, a far better guideline for anybody working with computers:
It is an old maxim of mine that when you have excluded the impossible, whatever remains, however improbable, must be the truth. (The Adventure of the Beryl Coronet, fwiw.)
In this particular case, Sherlock's maxim is far more relevant. Merely running a set of tests, however fine those tests might be, does not "exclude the impossible."
Admin
A lot of my friends are Vijays, and I've got to say that this is the first time I've come across one who is clearly an abject moron.
I've got another friend who was charged with writing a "real time compiler" (in his words) to determine what to do with several tons of molten steel at the other end of the rollers. Kicking off a full 45 seconds before the steel hit the back wall.
A wonderful opportunity for a 22 year old graduate, I feel. I've never been comfortable round steel mills since.
Admin
Somebody beat me to it but I'm going to post it anyway. Whenever reality disagrees with me, reality must be wrong.
Admin
Proving the correctness of a system does actually work and is sometimes done with very critical systems.
But you need to build the system with those formal methods from the ground up; you cannot just try to prove stuff about normal systems that have the occasional hack here and there. If you start making assumptions about how the program behaves, you always get it wrong.
Admin
A consultant is someone who takes what works in practice and tries to put it on paper...
Admin
I did, actually. It was already posted twice.
Admin
Oh for deity's sake.
Spock is most famous for saying 'Illogical'. Which would be a quote that would actually make sense in the context above. I don't see any reason to bring Sherlock Holmes into it. His brother Mycroft the High Optional, Logical, Multi-Evaluating Supervisor is more relevant...
Now that someone has, though, I have to point out that the 'Having excluded... whatever remains must be the truth' quote is illogical itself. It would be more accurate to say 'whatever remains must include the truth, as well as other untrue possibilities'.
Incidentally:
"CAPTCHA Test (Required For Anonymous Users) Prove that you're not a robot."
That test proves nothing of the kind. I may just be a robot with Captcha reading abilities...
Admin
Admin
There's a proof transfer protocol?
Admin
Admin
Nothing wrong with good old mathematical proof; I've used it myself and got some very good systems out of it. The problem comes when certain academics forget that in the "real world", compilers, linkers, operating systems, hardware, etc. will fsck things up.
Of course trying to tell all this to a formal methods professor who doesn't have hands-on experience is doomed to fail.
Rarely do people use formal methods correctly anyway...
Admin
You are correct. It is not impossible that you are a robot with Captcha reading abilities. You have therefore not excluded the impossible.
The requirement to "prove that you're not a robot" does not imply that, having passed a weak test of proof, you are not a robot.
That would be illogical, captain.
Admin
I strongly disagree. This WTF shows just how valuable integration testing can be. The alternative of placing it into production and seeing if the steel is ruined is less attractive to me.
It's all about layering tests: isolated unit tests (with continuous integration to run them automatically on every check-in), integration tests, and user acceptance tests (possibly with other types of tests as well). Automating the majority of this ensures that the tests are actually run.
Testing is a skill, and does require training and/or experience. But using TDD, I have found bugs in code that I've written that I would not have caught otherwise until much later in the development process.
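As a sketch of the kind of bug TDD catches early (purely hypothetical code, not the mill's actual software; the function name and temperature limits are made up), a unit test pinning down setpoint handling flags a sign error long before integration:

```python
def clamp_setpoint(celsius, lo=0.0, hi=1600.0):
    """Clamp a furnace temperature setpoint to its valid range.
    Hypothetical example; the names and limits are invented."""
    return max(lo, min(hi, celsius))

def test_sign_bug_is_caught():
    # A sign flip turning 255 into -255 fails fast right here,
    # instead of surfacing later in integration (or in production,
    # with several tons of molten steel on the line).
    assert clamp_setpoint(-255.0) == 0.0

def test_valid_setpoint_passes_through():
    assert clamp_setpoint(255.0) == 255.0
```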
Admin
Unfortunately, mathematics can't prove much about complicated programs on unrestricted Turing machines.
Why don't we have standard support for making objects into communicating sequential processes in the standard library of any language? Why don't we have a finite automaton other than regex(3) in the standard library?
Admin
There's an obvious distinction between systems that need to be built on formal proofs (a very small number, but not negligible), and systems that are engineered: ie, those that have a certain tolerance and can be tested via an increasingly narrowed set of unit and regression tests.
Then again, there's always PHP.
Admin
A proof is a guarantee of correctness when all assumptions hold true. It's very rare for those assumptions to cover everything that applies in reality and sometimes it does matter. Infinite precision is, for example, an assumption often used which doesn't hold in real life. Floating point inaccuracies can in fact be larger than the theoretical error bounds in some cases. The assumption that you can pay a guy $20/hr to do something instead of a computer is also usually ignored even if that strategy beats all the theoretical ones.
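A quick illustration of the infinite-precision assumption breaking down (Python here, but any IEEE 754 doubles behave the same):

```python
# The proof assumes exact real arithmetic; the hardware rounds.
print(0.1 + 0.2 == 0.3)      # False
print(0.1 + 0.2)             # 0.30000000000000004

# Above 2**53, consecutive integers aren't representable as doubles,
# so the "+ 1" vanishes entirely: an absolute error far larger than
# a naive "machine epsilon" bound would suggest.
print((1e16 + 1) - 1e16)     # 0.0
```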
Admin
Admin
Your attempt to explain cultural strangeness in the US is appreciated. However, you have skipped the weird part. The weird part: Dog the Bounty Hunter is a TV show.
Really.
Admin
No WTF here, just a typical story of a learning experience of a witless genius.
The reason this does not make a WTF is the insistence on the use of a simulation. I can't praise simulations enough. My first large-scale professional job was the mathematical design for a gunfire control system for use on board ship. We designed it around the simulation that we had programmed. Having proved the algorithms on that simulation (after several false starts), documented it carefully in pseudocode, farmed it out to the codewriters and built it into the system, we took it on board the customer's ship, plugged it in and switched it on, and it worked perfectly the first time. It was the first commercial system ever delivered in Ada, delivered on time and within budget. And all because of our sweet little simulation. Happy days.
Admin
The other RWTF is that Vijay didn't break into the simulator at midnight to test his fix in secret before he went bragging about his "proof". Leave it to a PhD to fail at CYA.
Admin
Admin
Admin
It's the difference between being smart and being street smart.
Admin
No, not luck, although we might have thought so at the time. But then we repeated the design process, er, repeatedly. With the same level of success.
Admin
Admin
Lol... try having an astrophysics PhD as a developer. Shot for the stars, but missed.
Admin
Like many others, I work in embedded controllers. I'm in the software team, and of course there is a corresponding hardware team.
As can be expected, there are those in the software team who blame all problems reflexively on bad hardware, and an equal number of hardware team members who instinctively blame the software for all ills.
While this goes by, the more jaded members of the teams work together to actually fix the problem of the day. We've all seen cases where it absolutely could not be our soft/hardware, it must be their hard/software, but at the end of the day, it really was us. So usually, we leave our egos at the door, try to work together and find the problem and/or some third party to blame (vendor, requirements, interface spec, etc.).
Generally speaking, things work out.
In years gone by, the old (read: ancient) head of The Hardware Team retired, and a Vijay type came in to replace him. At this point, the "hardware is fine, it's your software" rhetoric meter immediately went to 11, and stayed there.
The I/O test failed? Software.
New system board was literally emitting smoke when powered up? Software.
Screen is dark? Software (hint: software works better with the power on).
It didn't matter what the question was, the answer was "it's software's fault".
One day, a customer came in to run a V&V on some units before signing off. These units being six figure items, it's not surprising that this is a contractual requirement.
Two units passed the burn-in and 8 hour validation by V&V, with the customer witnessing and/or participating every step of the way.
On the third day, the third unit failed during power up. Vijay stormed into the software team area, absolutely livid.
"Your software", he started, "just failed! In front of the customer! We could lose a major sale! You have to fix it. Now!". Vijay was beside himself at this point.
"Okay, so how did the software fail?", I asked. While there was always a chance that something really was wrong in the code, the fact that two units had already passed the V&V tests filled me with a certain level of confidence.
"It says there was a hardware fault, and the customer won't accept it like that!", Vijay practically spat. "You have to change the software to say that the hardware's working!".
Sighs all around in the software room, as coders returned to their workstations/mountain dew/crossword puzzles/dilbert of the day, etc.
"Well Vijay", I asked, "have you ever considered the possibility that the software might actually have found a problem in the hardware? I mean, this is the HARDWARE SELF TEST software you're running, right?"
Numerous claims that there are no problems with the hardware, after all, two units already passed, etc.
At this point senior management (to a VP level) starts streaming in, trying to turn an on-site unhappy customer into a happy one. The customer rep himself is there. This has now ceased being a technical discussion and become a political one.
The VP of tech (my boss, and Vijay's) decrees that we'll pop open the box in question, and look at the board that the software claims is bad. We do so, and swap the bad board with another board type from a unit that's already passed internal V&V.
We rerun the preliminary tests. Lo and behold, the failing unit (now with a good board) passes. And the previously working unit (now with the swapped board) fails the power on self-test. Go figure. The customer nods; we'll restart the tests with this unit, and use it for unit 3.
At this point, the VP looks at Vijay, and says "well, I'd say that pretty much confirms whether it's software or hardware, wouldn't you say?"
"Of course!", Vijay said, close to yelling. "Their software is so bad it's not even predictable! This is why it takes so long to build good hardware! The test software is unreliable!".
The software people sighed.
The hardware people grimaced and/or quietly left the room.
The customer's eyes got really big.
The VP resembled a small, furry animal, trapped in the headlights of an oncoming car.
This was later described as a "jaw/floor" moment.
The unit was tested, the customer was happy (far more so than our VP), and it was shortly announced that our Vijay was to be "side-moted" to the Mechanical division. Vijay complained he wouldn't be able to do much in his new position, to which the VP replied, "that's pretty much the point".
Occasionally, executives do get it right.
Admin
http://books.google.co.uk/books?id=l-DzknmTgDUC&pg=PA116&lpg=PA116&dq=gp250+ferranti&source=bl&ots=2rgKA0phKl&sig=7O-BwJ5uz5g31JqX0Z1tkOrFVzo&hl=en&ei=OaEqS-mlDcj84AaaqNmOCQ&sa=X&oi=book_result&ct=result&resnum=1&ved=0CAoQ6AEwAA#v=onepage&q=gp250%20ferranti&f=false
Admin
It works well for a specific, known program. It works poorly for an arbitrary, variable program. By that, I mean that a particular program on a particular system is a much easier target for a proof than the space of all programs. This applies especially if the particular program is written with formal methods in mind.
People good at numerical analysis know this and track the inaccuracies to get correct error bounds. The naive model of infinite accuracy is a bad model for a floating-point unit. Numerical analysts therefore use a more accurate model; if you want to think about it, a still somewhat weak choice is to model floating-point units as acting on intervals of the real line.
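Here's a toy version of that interval model (hypothetical code; real implementations use directed rounding modes, which are faked here by widening each result by roughly one ulp, so treat this as illustrative only):

```python
class Interval:
    """Toy interval arithmetic: track [lo, hi] bounds instead of a
    single point value.  A real library would switch the FPU rounding
    direction; we approximate by padding each result slightly."""
    EPS = 2.0 ** -52  # roughly one unit in the last place for doubles

    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi

    def __add__(self, other):
        lo = self.lo + other.lo
        hi = self.hi + other.hi
        slack = max(abs(lo), abs(hi)) * self.EPS
        return Interval(lo - slack, hi + slack)

    def __contains__(self, x):
        return self.lo <= x <= self.hi

    def __repr__(self):
        return f"[{self.lo!r}, {self.hi!r}]"

# 0.1 + 0.2 is not exactly 0.3 in doubles, but the interval result
# still brackets the true real-number answer.
result = Interval(0.1) + Interval(0.2)
print(0.3 in result)   # True
```

The payoff is that the interval tells you when rounding has actually eaten your answer, instead of silently handing you a point value with unknown error.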
Admin
You've obviously never used MY favorite IDE.
Admin
Story fail.
What was wrong with the patch?
What was wrong with the proof?
Admin
I'm not sure that code would be my tool of choice for getting out of one of those. Then again, I've never tried it, so ...
Admin
Final acceptance test: Stand between roller and back wall.
Either way, a problem is solved.
Admin
Vijay sounds like my sister before the sex change operation.
Admin
I don't get it - nobody has mentioned that perhaps the simulator itself has the bug?
Perhaps the code correctly sets the temperature to 256 degrees (or whatever), but due to sign issues in the simulator it displays it as -255... Thus the code is ok, but the simulator is buggy.
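That general class of bug, reading an unsigned field as signed, is easy to reproduce (the raw value 0xFF01 below is hypothetical, chosen specifically so the signed reading comes out as -255):

```python
import struct

# The controller writes the raw 16-bit word 0xFF01 (unsigned 65281)...
raw = struct.pack("<H", 0xFF01)

# ...and a simulator that unpacks the same bytes as *signed*
# two's complement would display -255 instead.
(as_signed,) = struct.unpack("<h", raw)
print(as_signed)   # -255
```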
Admin
How do you test that the tests are correct?
How do you prove that your perception of reality ("tests don't work", "code does not compile") is correct?
How do you prove you're not just a brain in a jar talking to electrodes?
Admin
Hope you read this comment soon, before Mark gets his panties back inna twist.