Admin
If you had worked 40 hours a week and told them for 18 months that everything was going to plan, they would have got exactly what they paid for, you would have enjoyed those 18 months a lot more, and you would have found a new job just the same.
Admin
Um okay, got one. Being passed over for promotion in favour of the CEO's nephew. Boring and pedestrian, but, yeah.
Admin
So what's the "right" way to deal with features that aren't really assigned to a specific release? At any given time, the project I'm working on has several half-finished features which will be shipped either in the next release (if they're finished in time) or in the one after.
How we currently handle this is similar to the anti-pattern Alex describes, with "dev" and "main" branches/shelves. But within each of those, testing and such is done on immutable, labelled builds. We seem to avoid the Jenga pattern most of the time by associating commits with bugs and merging all the commits for a given bug at the same time.
We also have a branch for each release, but it's not created until the release is mostly finished.
I guess the obvious fix would be to improve our release-defining, but that's not likely to actually happen.
How do other people handle this sort of thing?
Admin
This has nothing to do with distributed vs. centralized: if a source control system has a locking mechanism, then a "Check-out/Edit/Check-in" style of development is possible. Some systems (Microsoft Visual SourceSafe) mandate locking, whereas others (SourceGear Vault) don't.
Again, nothing new. This is the whole idea behind Edit/Merge/Commit style of development. Distributed doesn't make merging any easier.
Well, there's no good reason to fork (label, branch, or shelf) your code "offline", so you're really only left with one thing: viewing history. There are some advantages to that.
Admin
Yeah but if you've gone 18 months without leave you've built up enough to take it all at once and have a nice loooooong rest before starting at the new place all fresh and, er, having forgotten how to get out of bed at 6 a.m. Er, yeah, I can see why that wouldn't work.
Admin
That's still going to be annoying though, and can easily become very nasty with a large development team. Sync - Commit BONG (A beat you to it) - Sync - Commit BONG (C) - Sync - Commit... Finally! What was I doing again?
So VSS really is the worst application ever conceived for this... I was starting to think that maybe we were just using it wrong! (More ammo for switching to something better.)
Admin
That's true, except that most (all?) DVCSes have figured out that Edit/Merge/Commit sucks. Much better to Edit/Commit/Merge with anonymous branches where necessary.
This way, if we happen to work on the same file, we aren't forced to merge prior to committing our fixes. We can solve our problems in isolation, and then worry about combining them. Much simpler.
CVCS could do this, of course, but generally don't AFAIK.
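The Edit/Commit/Merge flow described above can be sketched with git (a minimal, self-contained demo; the branch names, file names, and contents are all invented for illustration):

```shell
#!/bin/sh
# Demo: two lines of work each commit in isolation; merging happens afterwards.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q repo
cd repo
git config user.email dev@example.com
git config user.name Dev
echo "v1" > app.txt
git add app.txt
git commit -qm "initial"
main=$(git symbolic-ref --short HEAD)   # "main" or "master", depending on git version

# Alice commits her fix without merging anything first.
git checkout -qb alice-fix
echo "alice fix" >> app.txt
git commit -qam "alice: fix crash"

# Bob also commits in isolation, starting from the same point.
git checkout -q "$main"
git checkout -qb bob-feature
echo "bob code" > util.txt
git add util.txt
git commit -qm "bob: new helper"

# Only now do we worry about combining the two.
git checkout -q "$main"
git merge -q --no-edit alice-fix
git merge -q --no-edit bob-feature
git log --oneline
```

Both fixes were recorded safely before any merge took place; the combining step is deferred to the end rather than forced before each commit.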
Admin
Your release process is a little broken, but only semantically. Instead of considering these "in progress" features that will go in a "TBD release", assign them to a specific release from the get-go. You can always move them around as things change.
There's nothing wrong with using shelves - heck, create one for each feature if you really want - just don't create your "release candidate" builds from them. Merge changes into a release branch and then create a build from that to run through the gauntlet.
Admin
Good point, and one that seems to go back to personal/team preference. I find it awfully silly to say "let's just merge later", but then again I like the Check-out/Edit/Check-in style myself. Not enough to put up a fight, though. I would probably complain about it, however.
Admin
As a mathematician sticking around philosophers, I have to say that the choice of word "dimension" is quite, very, bad.
Dimensions should be independent things, that together span the whole set of possibilities; phase space or whatever you want to call it. But here it's a hierarchical ordering... They're levels of precision in a way. So why not "level" (to plug in the basic concept of low/high level languages) or "order"? That's what you're conveying anyway.
That, or some far-fetched analogy (bits & bytes = bones or cells, files & filesystem = flesh or organs, mutations = ...) to be worked out.
Admin
Not if you use HAL_VS! It's wonderful! It even writes the code if you ask it nicely...
Admin
This isn't simply for the same files, either. It's quite possible that you rely on some behavior in a part of the system that you're not changing but that someone else is. Dealing with that midstream just makes things more difficult than they need to be.
It's also more obvious that you're actually merging since you actually use a merge command, as opposed to an update. I can't imagine working under a checkout style regime on anything of substance.
Admin
This doesn't happen in practice. If you have that many people simultaneously working on the same files, you will have much bigger problems.
Big teams are compartmentalized by module, so in effect it's just a bunch of small teams integrating modules together. These integration decisions (i.e. cross-over) are best determined ahead of time (and documented!), not at commit-time.
VSS is among the best of the worst I'd say. You haven't seen SCM hell until you've worked with configspecs. shudder
Admin
If you find it's happening a lot, and you're always working on the same file(s), then you might find it pays to do some refactoring.
OTOH if it's because they always let three people loose at a programming job at once, and you're always fighting with each other over a commit, there's something iffy about the business process.
I would have thought it rare for more than one person to need to work on the same file at once unless there's something really funny with your system configuration.
Admin
And we do create a branch for each release before the first release candidate is built. It does end up with everything that makes it to the main branch before then, but it's generally only the dev branch that has works in progress. So I guess our system isn't all that far from "right" after all.
Thanks for the advice!
Admin
Sometimes (actually most of the time in my experience) the branch represents a feature-frozen version that is being readied for a stable release... no merging there, though many changesets will be propagated "upstream" in the process.
Admin
With regards to Edit/Merge/Commit:
I linked to Joel Spolsky as he explained it much better than me. My post was pointing to an effect of the way distributed source control works that I particularly like.
Distributed version control systems do not think in terms of bits/paths or file versions.
They work in terms of deltas, and only deltas. Version [GUIDB] is Version [GUIDA] +this, -that.
This means that you don't Edit/Merge/Commit - you Edit/Commit/Edit/Commit/Edit/Commit then PUSH and/or PULL - and the push/pull is when the merge happens.
In both DCVS and CCVS there will of course be places where the merge needs a human to sort it out no matter how clever the system.
However, in a distributed system, that human merging always happens at the end, not in the middle - so it doesn't break your chain of thought.
This encourages small commits in DCVS, while the behaviour of CCVS actively discourages them.
It's generally acknowledged that small focused commits are better - less work later if you find something has broken.
Small focused commits in DVCS mean less work now and less work later, while in CCVS, small focused commits mean more work now.
Of course there are - the same reasons you'd fork otherwise! Life doesn't stop just because somebody nuked the server - those projects still have deadlines!
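The Edit/Commit/Edit/Commit/.../merge-at-the-end rhythm described above can be sketched in a single toy repo (a stand-in for the push/pull step; all names and contents are invented):

```shell
#!/bin/sh
# Demo: a string of small focused commits, with the one merge deferred to the end.
set -e
work=$(mktemp -d)
cd "$work"
git init -q repo
cd repo
git config user.email dev@example.com
git config user.name Dev
echo "base" > notes.txt
git add notes.txt
git commit -qm "base"
main=$(git symbolic-ref --short HEAD)

# Edit/Commit, Edit/Commit, Edit/Commit: no merging interrupts the work.
git checkout -qb fix-bug-a
for step in parser validator renderer; do
    echo "fixed $step" >> notes.txt
    git commit -qam "fix-bug-a: $step"
done

# Meanwhile the mainline moved on (someone else's change landed).
git checkout -q "$main"
echo "upstream" > other.txt
git add other.txt
git commit -qm "unrelated upstream change"

# The merge (and any human conflict resolution) happens once, at the end.
git merge -q --no-edit fix-bug-a
```

None of the three small commits required stopping to merge; the chain of thought is only broken once, when the finished work is integrated.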
Admin
OTOH, refactoring is of course one of the tasks that makes that event rather likely!
Admin
Yes, but it has a bug where it sometimes deletes vital functions.
Admin
The way it's handled with git: create a new branch for every new feature, and every bug report/ticket number.
While working on a given branch you can always pull in from other sources, and when it's done, merge it back into whatever release branch you want.
Git makes this all very easy.
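The branch-per-feature, branch-per-ticket workflow just described might look like this (a hedged sketch; the branch names, ticket number, and files are made up):

```shell
#!/bin/sh
# Demo: one branch per feature and per ticket, merged into the mainline when done.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email dev@example.com
git config user.name Dev
echo "v1" > app.txt
git add app.txt
git commit -qm "initial"
main=$(git symbolic-ref --short HEAD)

# One branch per feature...
git checkout -qb feature/csv-export
echo "export code" > export.txt
git add export.txt
git commit -qm "add csv export"

# ...and one per bug report / ticket number.
git checkout -q "$main"
git checkout -qb bug/1234
echo "guard against null" > fix.txt
git add fix.txt
git commit -qm "fix #1234"

# When a branch is done, merge it into whichever release branch you want.
git checkout -q "$main"
git merge -q --no-edit feature/csv-export
git merge -q --no-edit bug/1234
git branch --merged
```

While either branch is in progress you could also `git merge "$main"` into it to pull in other people's work, as the comment notes.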
Admin
Using Firebird/InterBase (it's worse than Access).
Access is a little database for simple use; it's not a WTF in the correct situation, but it's easy to abuse. There's nothing wrong with using a simple RDBMS.
Firebird is crap; Access can sustain more records and users.
Admin
And as I said in my comment, every DVCS has had to solve this in one way or another in order to be at all viable.
Not only did you just describe how Git seems to work (and it does this in seconds, most often in under a second), you also described how recent builds of SVN with merge-tracking work (it just ended up taking half an hour to finish merging, and that's not counting any manual conflict resolution).
A little thought will show that the "tangled ball of rubber bands" is precisely the problem DVCS was invented to solve. Maybe an illustration is in order: Linus' role in Linux these days is essentially merging patches from other people. To do this, he pulls and merges from about a hundred top-level contributors (generally subsystem maintainers), who themselves pull and merge, or apply patches from, people lower down.
There does have to be a common ancestor for the repository itself, but beyond that, everything else typically just works.
And for those manual merges -- few and far between though they may be -- I have my Git configured to launch kdiff3, so I get a nice graphical 3-way diff I can edit.
I can believe you're not astroturfing for Perforce, but if you're going to claim that it's the only one which gets merging right, you really need to try Git. It's free (and open source), requires very little set-up (a server is optional, not required), and there are free books full of documentation out there.
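The kdiff3 hookup mentioned above amounts to two git config settings (a sketch; any 3-way-capable tool can be substituted for kdiff3, and the `HOME` override is only there to sandbox the demo's global config):

```shell
#!/bin/sh
set -e
export HOME=$(mktemp -d)    # sandbox: keep the demo out of your real ~/.gitconfig

# Tell git which graphical tool to launch for conflicted merges.
git config --global merge.tool kdiff3
git config --global mergetool.keepBackup false   # don't litter .orig files

# After a merge stops with conflicts, running `git mergetool` opens the
# 3-way view in kdiff3 (kdiff3 itself need not be installed to set this up).
git config --global merge.tool
```

With this in place, `git mergetool` replaces hand-editing conflict markers for the few merges that need a human.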
Admin
There's nothing wrong with locking. You just don't like it.
That's fine... and I don't like not locking; it's just a matter of team preference.
Yes yes, and time is really just discrete snapshots of the universe as it moves in a direction towards greater entropy. But realistically, we need watches and timezones... and realistically, files are bits/paths and (in revision control) have a history of changes.
I understand how directed acyclic graphs work and that a file can be multi-headed and have multiple "current" versions. But what does that mean in practice? You can't get the file without resolving the merge... thus it's effectively just a reverse lock.
Again, nothing new here. Except a lot of confusion for the developers who have a hard enough time grasping the three dimensions of revision control.
A commit should represent a reasonable attempt at implementing a specific task. Thus, it's the tasks that should be kept small, not the commits. This is an important distinction -- if tasks are big but commits are kept small, then commits and tasks become further separated.
Admin
I liked VMS's built-in versioning: MyProg.c;1 MyProg.c;2 MyProg.c;3
Admin
TRWTF is 3857 words?
Admin
Backup a directory tree and reload and you get a clone of what you started with. Put that directory tree into version control, and check it out again, and it's nowhere near a clone.
I'm doing my first Drupal web site, and looked into Subversion and Git. I was horrified at how much Version Control does not track:
File ownership.
File access; e.g. which pieces must be writable by others?
File timestamps; forcing modtime=NOW on checkout is convenient for 'make', but there's no option to preserve the original.
Database contents; e.g. MySQL. Try to put a MySQL backup into version control and you get one diff line per table.
My Linux (case sensitive) system had two files, named "Install.text" and "install.text"; the Subversion repository was on OS X (case insensitive). An svn checkout on OS X confused svn terribly. Not sure whether the repository or the checkout was blown.
As far as I can see, "Version Control" means "Source Code Version Control", and it is not yet ready for Web 3.0.
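The timestamp complaint in the list above is easy to demonstrate (a self-contained sketch; file names and the backdate are made up):

```shell
#!/bin/sh
# Demo: git restores a file's content, but not its modification time.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email dev@example.com
git config user.name Dev
echo "hello" > doc.txt
touch -t 200101010000 doc.txt   # backdate the file to Jan 2001
git add doc.txt
git commit -qm "add doc"

rm doc.txt
git checkout -q -- doc.txt      # restore the file from version control
ls -l doc.txt                   # content is back, but the mtime is NOW, not 2001
```

The same applies to ownership and most permission bits (git records only the executable bit), which is why a checkout is not a byte-for-byte clone of what went in.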
Admin
I.e., developers who know nothing about SQL think it's only good for storing objects in a table, thus take no advantage of the tool, and thus design an application that uses little or no features, which IS a WTF in itself, for using the wrong tool for the job.
I'm not a DBA and I'm quite surprised to see how little clue other devs have about SQL in general (yes, all of you who use MySQL can be included in this if you think InnoDB is strictly ACID compliant, for example). In the end: know your tools and use them right, and remember some tools are USELESS for some projects - there is NO using them right (like Access for anything, or MySQL for complex applications).
In the end, the only good ones are and will be those who try to do better every single time, spend time reading and learning all they can (and posting their own fails on tdwtf for our enjoyment).
Admin
Another real WTF: Not staying at home when you're too ill to go to work.
Admin
...source code control was done on floppy disks. The release code was written to a floppy (5.25") and write protected and put in the fire safe. To comply with BS5750, we also printed the source out on a huge sheaf of paper. As a backup. Happy days.
Admin
The article and indeed preceding comments mentioned this very fact. Do try to keep up.
Web 3.0? Now is that a geometric increase in bullshit, or an arithmetic one? I was under the impression that the Next Big Fad was finally implementing the semantic web.
What it certainly seems to be heading towards is a complete reimplementation of an operating system using nothing but javascript and HTML. In this situation, I'd expect file metadata to be in its header in some suitable form, and therefore trivially amenable to source control.
Admin
Not even Microsoft use it internally. Haven't done for years. They've had their own perforce-based thing for a little while (which was awful) but nowadays they've eaten their own dogfood and moved to TFS.
Given that there are lots of lovely tools for migrating out of awful old control systems that are so atrocious even their creators prefer never to look at them ever again, TRWTF would presumably be carrying on using it.
Admin
The file myobject.h v5 probably won't be compatible with myobject.cpp v5.
The repository needs to be able to tell you the state of the entire 'fork' at each point in time, so you can pull out myobject.h and myobject.cpp as they both were at a specific point in time.
The core thing is that you shouldn't think in terms of files, you should be thinking in terms of changes made to the whole 'fork'.
Of course, this isn't specific to DCVS against CCVS.
However, it was thinking in terms of 'files' rather than 'fork state' that got VSS and CVS into that mess.
Yes, that is a very good point. However, most tasks are divisible - and generally they are easily divisible beyond what a reasonable manager should need to ask a programmer.
"Fix Bug A" is generally a reasonable request. However, once the programmer gets into the code, they'll probably find that there are several disparate elements that cause the bug that all need to be fixed to properly solve the bug.
So as "Fix Bug A" is now known to actually be several smaller tasks, the programmer should provide these as separate commits.
This is something DCVS makes easy - the programmer doesn't need to ask anyone, doesn't risk losing a lock on the necessary files, or need to wait for a lock on another file once they realise it's important, and doesn't need to merge anything (introducing unknown elements) until the 'big' task of "Fix Bug A" is done.
So DCVS encourages good practice, while CCVS actively discourages it.
Admin
Rational ClearCase is used for others, that's better as it does at least have atomic commits, but it's not much of an improvement and rather complex to use.
Manglement appear to think it would be too difficult to migrate to anything else.
Admin
The only ones claiming that MySQL is ACID compliant are MySQL/Oracle themselves.
ACID : 'C' compliance means any transaction will bring the database from a consistent state to a consistent state, both of which of course respect every single rule implemented in the system.
Due to the way MySQL treats CASCADE, triggers will NOT be fired on cascade operations, which violates the consistency rule by making a cascaded action bypass triggers which inherently contain consistency rules.
On the same topic, MSSQL's trigger nesting is limited to 32 levels, which implies that in the event that a 33rd trigger should have been fired, the database will be left in an inconsistent state, thus breaking 'C' compliance as well.
On the exact same topic, PostgreSQL's trigger nesting is NOT limited and their doc states developers should be careful not to create infinite trigger loops.
I do not know Oracle a lot but I would expect it to do the same as Postgres, considering how both are extremely focused on SQL standards, consistency and reliability.
Yes, most people don't care, most people don't notice, and most people don't quite understand what ACID means - they buy the sticker whether it's true or not. That is why you can read everywhere that InnoDB is fine, written by people who don't use triggers, cascades, or both (at least I hope so... the consequences would be interesting).
On the same ACID topic, for those who are interested, the 'I' is a very interesting beast ;)
Admin
Anyhoo, you could listen to http://nosql.mypopescu.com/post/1085685966/mysql-is-not-acid-compliant, if you're bored. Might be a bit outdated nowadays. Some of you may find it familiar...
Admin
Especially nowadays when there exists (a) wireless internet and (b) neat little tables which can extend over a sickbed that will easily accommodate a laptop.
Send the email, tell them "WFB".
Admin
IIRC "write protect" was performed by punching a slot out of the cardboard which formed the envelope for the disc. You could un-write-protect it by sticking a piece of opaque tape (insulating tape, gaffa tape, wotever) over the notch.
Admin
CAPTCHA: incassum -- incassum you didn't know, it means you are barely literate.
Admin
The most important thing that distributed version control (DVCS) brings is the third workflow (besides Check-Out (Lock) - Edit - Check-in (Unlock) and Edit - Merge (Update) - Commit mentioned in the article):
See e.g. "Understanding Version Control" by Eric S. Raymond, or "Version Control by Example" by Eric Sink.
Admin
I believe it's a way Nagesh uses to measure the trolliness of his comments.
There's a certain magical amount of total length of red squigglies that, when achieved, captures the most flames. Too short only snags the hardcore spelling snobs, and too long only gets a few morans.
Admin
you must be fun at dinner parties...
Admin
I don't know how Perforce does it, or if it does it, but with the default "recursive" merge strategy (merge algorithm) that Git uses, it can deal with the case where there are multiple common ancestors, as e.g. in a so-called criss-cross merge.
Same with Git. And all that it requires is for merge commits to remember all its parents...
Nb. with Git you can even merge unrelated branches, e.g. incorporating code which was developed independently and is now part of a larger project.
Admin
Unfortunately the Wikipedia page is full of unsupported pro-MySQL crap ("Google uses MySQL" instead of "some very minor Google apps use MySQL", etc.), and I've had the displeasure of witnessing the bias around it. On the other hand... Wikipedia is MySQL-based (and I doubt they even have a single DBA on their dev team, when you see the number of dead links to paragraphs long since deleted ;) ).
Never tried BDB or the other engine they say should be ACID... but if it's as ACID as InnoDB (which they used to say was ACID until I modified it), meh.
Nobody talks about it because the DBAs who see that as a problem are used to better and stricter RDBMSes (Oracle, PostgreSQL, ...) and would at best use MySQL as a "cheap" solution, if at all.
Admin
So, we'll be able to see your replacement Wikipedia article on this subject when?
Understanding its limitations, I find Wikipedia a huge asset. I believe it's a Good Thing to correct inaccuracies and mistakes as soon as you see them. Although arguments over matters of opinion based on personal preferences are probably best kept away from, as we all have far too much of that sort of thing to do here instead.