Admin
I broke a build once... I forgot to check in some file... fortunately we didn't have to wear stupid hats or anything; we just had to fix it before the next build.
Admin
Sadly this is more common than one may think. We came very close to not being able to perform any reboots without disconnecting the network first, because a network problem had coincided with a non-production server reboot one day.
God I hate this crap... And you were second anyway. CAPTCHA: "ewww" -- how fitting.
Admin
The real WTF is that they have no way of manually initiating a build. I consider that unacceptable. Broken builds suck, yeah, but they shouldn't keep people from working any longer than it takes to fix the error.
Admin
CruiseControl + NUnit = Solution
Admin
Uh?
CruiseControl + NUnit isn't going to help this situation any. For all we know, they were using CruiseControl and NUnit.
The problem is that nobody was allowed to check in code without verifying that it passed unit tests first, which would have been fine if the stupid manager hadn't insisted on sitting there and watching it compile at the developer's desk.
After all, the whole point of source control is that if someone commits something bad, you can always back out of it. Nobody wants to be the guy who breaks the build, but if your team never ever puts anything broken into source control... well, did you really need source control in the first place?
Admin
The real WTF is that the guy in the photo (with the funny hat - so he's obviously a Build Breaker) got to sit next to three beautiful women!
Captcha: cognac - well, I guess that would do as well!
Admin
And how exactly does this procedure stop builds from being broken by people who forget to check in a file? Whoops, the compile and test that you just watched mean nothing, because I forgot to add new file X to the repository.
Breaking the build is a fact of life. The best you can do is mitigate the damage, and fix it quickly.
The best policy I heard was that code should be checked in before lunch break, and a build starts at 12:00. If it fails, everyone should still be around to make sure that the build can be fixed.
As for breaking unit tests... Well, that's not too hard to notice. An automated script can run the unit tests, and if they fail you lynch the guy who broke it. :)
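A minimal sketch of what that automated noon-build script might look like, assuming an svn working copy, a make-based build, and a team mailing list (the commands, paths, and addresses are illustrative assumptions, not details from the story):

```python
# Hypothetical noon-build script: update, build, run the tests,
# and mail the team if anything fails while everyone is still around.
import smtplib
import subprocess
from email.message import EmailMessage

TEAM = "dev-team@example.com"      # assumed mailing list
WORKING_COPY = "/builds/trunk"     # assumed checkout location

def run(cmd):
    """Run a command in the working copy; return (ok, combined output)."""
    proc = subprocess.run(cmd, cwd=WORKING_COPY, capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr

def notify(subject, body):
    msg = EmailMessage()
    msg["From"] = "buildbot@example.com"
    msg["To"] = TEAM
    msg["Subject"] = subject
    msg.set_content(body)
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    ok, output = run(["svn", "update"])
    if ok:
        ok, output = run(["make", "all"])    # build step
    if ok:
        ok, output = run(["make", "test"])   # unit tests
    if not ok:
        # It's 12:00, so everyone should still be around to fix it.
        notify("Noon build FAILED", output[-4000:])
```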
Admin
Most shops I've worked at have multiple build and test servers.
The idea is that every hour, a new build fires off on a build server, and a test server picks up on the latest completed build.
I mean, who takes overnight to build these days? Even a full Gentoo rebuild can take under an hour if you throw enough machines at it (distcc)
Admin
I can understand being concerned about unit tests, but if people are checking in code that doesn't even compile (save the case of forgetting to add a new file to the repository or something), then there are more serious issues at hand. If I'm working on something, and the code doesn't compile so I can't even do testing on it, the last thing I'm going to do is check it in. And if someone does precisely that, well...they shouldn't be working there anymore because even a total programming novice should understand why that doesn't make any sense.
Admin
We are quite happy with the delayed commit feature of TeamCity: http://www.jetbrains.com/teamcity/features/ide_integrations.html#Pre-tested_delayed_Commit
Admin
On one of my co-op jobs, the code took about 30 minutes to compile. (This was in 2003) One of the permanent employees changed the name of a variable and committed the new code to the repository.
He missed one. It was obvious that he didn't even try to compile the code. He didn't want to take the 30 minutes to check his work. I found the problem about 20 minutes into the compile. I had to unlock the unchanged file, change the name, re-compile, test, then check the file in.
Guess who was blamed for breaking the build?
Admin
Thank you for that epic tale.
Admin
This functionality is automated in Team Foundation Server. You can create a policy that will reject a check-in if things like unit tests, code analysis and ability to build do not pass. It is all automated and doesn't rely on a pointy haired boss. It is not a big deal. The real WTF is that they didn't use an automated system.
Admin
Agreed.
If this WTF took place recently, then yeah, they didn't have the best automated build practices, although look at the site you're on.
But if it was 5 or more years ago, I wouldn't be surprised at all.
Distributed builds FTW.
Admin
Even having the manager watch the local build won't prevent this scenario.
Stuff happens. That's why you do continuous builds - so you find out about it ASAP so it can be fixed. I may not remember what specific file(s) I worked on yesterday, but I surely remember what I worked on an hour ago.
Admin
I worked in a place once with a dedicated machine to automagically build and test every new commit. If it failed, the commit was rolled back. The problem was that if you merged sources while the machine was building, you ran the risk of having the test fail, which meant backing out the merge and re-merging against the rolled-back code. So people started keeping a backup of their work before merging, which meant that everyone and their machines had become part of the source control. Of course, this was infinitely better than their previous method of not having source control. Did I mention that the company did in excess of $30m in revenue every year?
Admin
So simple: break the code in the repo, buy the next round of beer for the dev group. Works like a dream.
Admin
Dilbert has the "Plunger of Blame"
My Dept has a trophy of a Horse's hindquarters (donated by me). :-)
Admin
Ah yes. I've used a few different things: charging a quarter, a nasty picture of a cat on black velvet, plastic dog turd, etc.
The best one is to sit in the guilty party's cube and identify in minute detail the precise step of the written process that they failed to observe, (you do have it written down, right?), and in rare cases update the process.
The tedium of having someone sitting in your cube wittering on at you while everyone around you is listening seems to work.
Admin
We had the same system, except the "manager" was a script that did the compile/run/test for you, and added a check-in signature to the check-in comments. Server was set to reject check-ins that didn't have the signature. The signature was a hash of the list of changed files, so not trivial to forge (but of course possible).
The build would still break on the server, though, because someone would add a reference to a header file without adding it to source control, or similar problems not easily caught by local build-and-tests.
In the end, we ended up adding some ability to bypass this checking for cases where it "wasn't really needed," and build breakage went up a bit again, so I don't know what a livable solution is.
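For what it's worth, here's a rough sketch of how that kind of check-in signature could be computed on the client; the server-side hook would recompute the same hash from the transaction's file list and compare. This is an illustration of the idea, not the actual system described, and the file-list parsing is deliberately naive:

```python
# Hypothetical "check-in signature": hash the sorted list of locally
# changed files and paste the digest into the commit comment.
import hashlib
import subprocess

def changed_files(working_copy="."):
    """Return the sorted list of modified/added/deleted files per `svn status`."""
    out = subprocess.run(["svn", "status"], cwd=working_copy,
                         capture_output=True, text=True).stdout
    # Naive parse: take the last token of each status line (sketch only;
    # this ignores filenames containing spaces).
    return sorted(line.split()[-1] for line in out.splitlines()
                  if line[:1] in ("M", "A", "D"))

def checkin_signature(files):
    """Not cryptographically binding, but not trivial to forge casually."""
    digest = hashlib.sha1("\n".join(files).encode("utf-8")).hexdigest()
    return f"verified-build:{digest[:12]}"

if __name__ == "__main__":
    print(checkin_signature(changed_files()))
```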
Admin
Hey, I've got a crazy idea: why don't you treat your coders like adults, and realize that they didn't mean to break the build, and if they have any integrity at all, the fact that they broke it is sufficient disincentive to do it again without stupid trophies, hats or penalties.
The presence of a "Broke the build" trophy or other stupid token is a big "find a new job where you're managed by adults" sign for me.
Admin
That would not be fun, just boring. I believe you're ranting because you were once a victim of the "Build Breaker" hat policy, perhaps?
Admin
Were people at least allowed to commit code to their own feature branches without this insane review procedure? Or did the team even use feature branches? If not, then that's a WTF in itself if consistency was considered that important.
And having to use local backups defeats much of the purpose of using source control.
Also, some source control systems use "change sets" or "revisions" to group changes by commit. So if revision 7579 fails to build, you can actually retrieve revision 7578 and build that one. I know, it's kind of hard to believe...
Admin
I've only worked two places where they had one of these stupid "awards", and both places were squarely in the Dilbertoverse where they substituted "fun" policies for competent management. Given a choice between a reasonable deadline set by a manager who knows the technical challenges of what they're assigning and a "Hawaiian Shirt Day", I'll take the reasonable deadline.
But maybe that's just because I'm old. Or maybe it's because I'm experienced.
Admin
I was just pointing out my personal experience and why it was so different from this WTF... but if you feel annoyed, well, sue me.
Admin
For instance, I once worked at a place where building the application required generating vast numbers of source files from the tables of a database. Every single table had to be turned into a source file. Well, actually, every table had to be turned into several nearly-identical source files. But wait, then there also had to be several nearly-identical source files representing each table relationship.
On top of that, the ant build file was written by an idiot who didn't trust ant's ability to recognize timestamps. As a result, all files were always regenerated and recompiled, no matter what. I made a custom build file but most of the other developers weren't proficient enough with ant to do that at the time.
Admin
There are companies that don't have an automated test?
My basic programming class ten years ago had that - all projects were submitted electronically, and you could sit and watch it get "graded" in (more-or-less) real time.
I don't use anything similar myself at the moment, but that's because I'm a one-man team in a company that "doesn't have programmers" (mm hmm) - I assumed that real shops would have that sort of thing as Standard Equipment...
Admin
The WTF here was not the manager insisting on witnessing compiles; that was only your normal management bull that any developer should be able to circumvent/sabotage/destroy.
No, the real WTF was that the build was being done overnight instead of immediately on commit. Why anyone would think this a good idea is beyond me.
And WTF are testers doing taking a new build off a dev branch every day? I'm guessing one of those monolithic QA fallacy processes where every single test in the 6,000 page document must pass or the whole lot have to be run again?
Admin
Sadly this happens more often than you know... 1 in 6 of my last clients had some form of automated testing. Most of the unit tests were a joke, though -- I remember someone once saying that if it doesn't have an assert, it's not a unit test.
Admin
Uhm. OK. I see the WTF. But aside from that, this development house seems almost competent! They have automated builds! Unit tests! Development managers! Hell, it must be quite a slick operation to have testing carried out daily on the code that was only checked in the day before.
At my current place we have no unit tests, have no automated builds and often have to wait weeks before our code sees a tester. As a treat for you lot I might even see if I can sneak a few WTFs out of the codebase :)
Admin
One place I worked had a hat (one of those mad spiky multicoloured hats only people with no personality buy) that you had to 'own' before checking in code to CVS.
#1 wait for the 'hat' to become free
#2 get the hat
#3 update the source tree
#4 build
#5 run the unit tests
#6 if #4 or #5 fail, fix it
#7 commit
#8 relinquish the hat
It was OK; the boss changed the non-written-down requirements every week, so nothing was getting done anyway.
Admin
I would get lice on purpose just to infect the failure hat.
Admin
We used to have a merge token at one place. We all worked in our own branches, ran unit tests, etc., and when we had finished we code-reviewed, then waited for the merge token to become available. Then we could merge our code. Only after we had built the project could we relinquish the merge token. This was fine until you made a change to certain header files, because then the build could take up to 6 hours. Still, it was a pretty good process.
Admin
What I don't understand is why not use a source control program that actually runs all the tests before committing the code? I know stuff like this exists -- I don't know the name off the top of my head -- but it would certainly remedy the pains of this situation.
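If the tooling won't do it for you, even a thin wrapper script gets most of the way there. A minimal sketch, assuming svn and a `make test` target (both assumptions), that simply refuses to commit until the suite passes:

```python
# Hypothetical commit wrapper: developers call this instead of `svn commit`.
import subprocess
import sys

def tests_pass():
    """Run the suite; any non-zero exit means the commit is refused."""
    return subprocess.run(["make", "test"]).returncode == 0

def main():
    if not tests_pass():
        print("Tests failed -- commit refused. Fix them first.")
        return 1
    # Forward any remaining arguments (message, paths) to svn.
    return subprocess.run(["svn", "commit", *sys.argv[1:]]).returncode

if __name__ == "__main__":
    sys.exit(main())
```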
Admin
So uh... Incredibuild anyone? With Incredibuild, there really are no excuses not to rebuild before checking in.
Admin
With a distributed version control tool like Bazaar (http://bazaar-vcs.org), it's pretty easy to guard the trunk with an automated PQM (Patch Queue Manager, https://launchpad.net/pqm). Developers don't get write access to the trunk (or whatever branch(es) you want to protect); instead they request that the PQM process do it for them (e.g. by a gpg-signed email). PQM can then do the merge in a temporary directory, run any tests you configure it to, and commit the merge IFF the tests pass. When it's done, it notifies the submitter of the result, and then waits for the next thing in the queue.
With this system, tests breaking on trunk is quite rare (although inadvertent time bombs in tests and other mishaps can still happen, it does catch the vast majority of problems).
Note also that this system doesn't require the developer's workstation to be tied up running the tests, or even to sit there for a long time waiting for an "svn ci" or whatever to complete; once the merge request is sent, you can e.g. unplug your laptop and go home immediately.
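For anyone who hasn't seen PQM, the core loop is simple enough to sketch. This is a conceptual illustration of the gatekeeper idea rather than actual PQM code; the paths, test command, and notification are all assumptions:

```python
# Conceptual gatekeeper: merge a requested branch into a scratch copy of
# trunk, run the tests, and land the merge on the real trunk only if they pass.
import subprocess
import tempfile

TRUNK = "/srv/branches/trunk"   # protected branch (assumed path)
TESTS = ["make", "check"]       # project test command (assumed)

def notify(address, result):
    # Placeholder: the real PQM emails the submitter; here we just print.
    print(f"notify {address}: merge {result}")

def process(request_branch, submitter_email):
    with tempfile.TemporaryDirectory() as tmp:
        work = f"{tmp}/work"
        def run(cmd, cwd=work):
            return subprocess.run(cmd, cwd=cwd).returncode == 0
        ok = (run(["bzr", "branch", TRUNK, work], cwd=tmp)   # fresh copy of trunk
              and run(["bzr", "merge", request_branch])      # apply the request
              and run(TESTS)                                 # gate on the tests
              and run(["bzr", "commit", "-m", f"Merge {request_branch}"])
              and run(["bzr", "push", TRUNK]))               # land it on trunk
        notify(submitter_email, "merged" if ok else "rejected")
```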
Admin
The most common cause of broken builds is working on new files and checking in only one or two when you have three or four. Everything compiles fine in your environment, but you forget to check in the classes you just created. It would be insane to check in code that doesn't even compile on your machine (though people do that too).
Admin
I know I'm probably missing something here - granted, it's been a while - but if a build didn't work, couldn't one just send an email to the last person to check something in saying so and regress back to the previous set of files - because obviously you don't actually delete the previous successful build until you know you can make a new one, do you...?
I dunno - it just seems to me that if not breaking the build is that important, make sure the build can't break; and if it's that common, assume that any check-in will break the build and take steps to minimise the consequent pain. Much like ACID compliance, or journalling - assume it will go wrong, and make sure it won't matter when it does.
So much better than throwing tantrums, being micromanaged, or wearing stupid ass hats.
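Something like the following would be enough to make a broken check-in a non-event: find out who committed last, tell them, and park everyone else on the last revision that built. The commands, paths, and the "last good revision" bookkeeping are assumptions for illustration:

```python
# Sketch of "assume it will break and make it not matter".
import subprocess

WORKING_COPY = "/builds/trunk"   # assumed build checkout

def svn(*args):
    return subprocess.run(["svn", *args], cwd=WORKING_COPY,
                          capture_output=True, text=True)

def last_committer():
    # `svn log -l 1` prints the newest entry; the author is the second
    # field of the header line ("r7579 | author | date | ...").
    header = svn("log", "-l", "1").stdout.splitlines()[1]
    return header.split("|")[1].strip()

def handle_broken_build(last_good_revision):
    author = last_committer()
    print(f"Mail {author}: you appear to have broken the build.")
    # Keep everyone else working against the last revision that built.
    svn("update", "-r", str(last_good_revision))
```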
Admin
About 1992-ish, I was tasked with porting about a million lines of C++ from SunOS to Solaris, long before SUN put out any of the migration tools. At the time, it was a 3.5 hour top-to-bottom build. After a few months, I finally got the thing to compile and run correctly.
In parallel, another part of our organization had been doing this massive feature-upgrade, and had a branch checked out for 9 (yes, nine!) months.
Then came merge-day.
I ran the update, and discovered more than 3,000 differences. I slowly went through them all (terrified at having to debug God-knows-what) and resolved each one.
When I checked it in, I sent out a notice that everyone had better make sure their code complied with all of the changes (a document of what to change, and a script to stream-edit source files was provided).
There were 3 minor quirks that took me 30 minutes to resolve (I really expected it to be worse). Unfortunately, the subversive team refused to spend the time to make sure their code had all the SunOS/Solaris changes, and when they finally went to merge their branch into the main, they had a horrific time.
I was lucky; two levels of management told me to back out any and all changes that broke the build. The other team was PO'd, but they had to eat it.
Good times.
Admin
That's only if your revision control system is stupid.
bzr with a pqm-based workflow has been mentioned here already, and it's Good Stuff. For folks who can't get away from more conventional toolage, you can run "svn export" with a working copy as a source; this will create a tree containing only files which are properly checked in. Integrate that process with your build scripts... and there you go.
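A small sketch of that "svn export" trick, assuming a make-based build (the paths and targets are illustrative): exporting from a working copy copies only versioned files, so anything you forgot to `svn add` is missing from the build tree and the failure shows up on your own machine first.

```python
# Build from an exported (checked-in-files-only) copy of the working copy.
import shutil
import subprocess

def clean_build(working_copy="/home/me/project", export_dir="/tmp/clean-tree"):
    shutil.rmtree(export_dir, ignore_errors=True)
    # `svn export` ignores unversioned files -- exactly the ones that
    # break the build when you forget to add them.
    subprocess.run(["svn", "export", working_copy, export_dir], check=True)
    subprocess.run(["make", "all"], cwd=export_dir, check=True)

if __name__ == "__main__":
    clean_build()
```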
Admin
Concur... and furthermore, with any obviously demeaning work practice like that, your company is open to legal action by the employee(s) concerned.
Apart from the fact that a dev manager who manages his/her devs like this is well on the way to PHB-ness ....
Admin
So, essentially, you have stopped doing this:
[image]
And this:
[image]
So you've grown up and now treat adults like adults are supposed to be treated, without giving in to PHB-ness or demeaning them any more.
Congratulations .... I guess.
Admin
And I believe that you, Sir, have no idea of the concept of "human dignity" ....
Admin
I joined a project on SourceForge recently, and about the third thing I did was break the build because I forgot to commit some stuff. The app would compile but fail to run because of missing content. Since it's a project with developers across the world, hats aren't an option and snail mail would take too long, so it was enough to apologize, slap on a temporary fix so it would at least run, and fix it properly ASAP.
I work mostly on stuff that doesn't need compiling to work, but where compilability is a question, it shouldn't be too difficult to tag an uncompilable build as broken and, without throwing a fit, have the build system revert to the previous untagged revision. That would make the process fail-safe, and the hat could still be used to minimize broken builds.
Admin
Well, I too have broken a build two or three times, all because I had forgotten to add a file to the repository. And we do have a Broken Build Policy: the build-breaker must buy everyone affected by the broken build a beer.
Admin
Currently working on a project with around 250,000 lines of C++ and zero tests. It certainly shows.
captcha: alarm - that's right.
Admin
Isn't it possible to revert to the latest working build?
Then from there you continue to work locally, and once things are fixed, you get the latest version of the project and then merge in your changes.
That's how we do it here.
Admin
Everyone affected, eh? I'd be claiming "mental anguish" if it got me a free beer.