• gnasher729 (unregistered) in reply to QJo
    QJo:
    There was so much to do and the deadlines were so tight it resulted in me working on the average of 70 hour weeks (all overtime unpaid) and being denied any decent leave for 18 months. Surprise was expressed at my exit interview that one of the reasons I was leaving was that I was tired.

    If you had worked 40 hours a week and told them for 18 months that everything was going to plan, they would have got exactly what they paid for, you would have enjoyed those 18 months a lot more, and you would have found a new job just the same.

  • Ghost of Nagesh (unregistered) in reply to The poop... of DOOM!
    The poop... of DOOM!:
    Paratus:
    The poop... of DOOM!:
    The "Real" WTF:
    6000 - using ACCESS as a database
    7000 WTFP for using VB
    7000 WTFP for using PHP

    VB and PHP are certainly RWTFs, but there's no way that they're worse than using Access.

    He said using Access as a database, so you can combine that.

    A PHP application calling an Access database would result in 13000 WTFP (and a developer who's been committed to a mental hospital)

    And hosting the system on Unix and the DB on Windows. Win!!!!

  • (cs) in reply to QJo
    QJo:
    Anonymous Cow-Herd:
    QJo:
    10^7 WTFP for turning up at an interview for your perfect job wearing brown shoes
    Pff, I once overslept on an interview day, and as a result arrived unwashed, unshaved, with an unironed shirt and shoes that had been dusted off rather than polished, and 20 minutes late. I got the job and started the following Monday.

    Maybe Graham's number WTFP for using SourceSafe for anything other than to demonstrate to management that you're using it.

    Any advance on Graham's Number? Who's first up for suggesting a particular WTF is worth Aleph-Null WTFP?

    Um okay, got one. Being passed over for promotion in favour of the CEO's nephew. Boring and pedestrian, but, yeah.

  • (cs)

    So what's the "right" way to deal with features that aren't really assigned to a specific release? At any given time, the project I'm working on has several half-finished features which will be shipped either in the next release (if they're finished in time) or in the one after.

    How we currently handle this is similar to the anti-pattern Alex describes, with "dev" and "main" branches/shelves. But within each of those, testing and such is done on immutable, labelled builds. We seem to avoid the Jenga pattern most of the time by associating commits with bugs and merging all the commits for a given bug at the same time.

    We also have a branch for each release, but it's not created until the release is mostly finished.

    I guess the obvious fix would be to improve our release-defining, but that's not likely to actually happen.

    How do other people handle this sort of thing?

  • (cs) in reply to Part-time dev
    Part-time dev:
    One of the really key things I've found with Git is that you never have to 'check-out' a file and two people working on the same file is rarely an issue.

    This has nothing to do with distributed vs centralized: if a source control system has a locking mechanism, then a "Check-out/Edit/Check-in" style of development is possible. Some systems (Microsoft Visual SourceSafe) mandate locking, whereas others (SourceGear Vault) don't.

    Part-time dev:
    If I'm slower than her, when I sync with her to merge in her changes (or a central server) I magically get her fixes - unless we both edit the same method, it Just Works.

    Again, nothing new. This is the whole idea behind Edit/Merge/Commit style of development. Distributed doesn't make merging any easier.

    Part-time dev:
    you can all keep working - including rollbacks, 'forks' and 'labelling' - when the network (or server) is down and know that it's not going to be painful to put it all together when the sysadmins get it fixed.

    Well, there's no good reason to fork (label, branch, or shelf) your code "offline", so you're really only left with one thing: viewing history. There are some advantages to that.

  • (cs) in reply to snoofle
    snoofle:
    QJo:
    30 hours weekly unpaid overtime for a long time
    We've all done that early on in our careers. Once burned, twice shy. Once you learn to see it coming you can make for the exit long before you're exhausted (who wants to show up at a new job the first day - needing a vacation?)

    It's the fools who keep doing it over and over that cause management to continue this practice of abusing and then discarding employees.

    Yeah but if you've gone 18 months without leave you've built up enough to take it all at once and have a nice loooooong rest before starting at the new place all fresh and, er, having forgotten how to get out of bed at 6 a.m. Er, yeah, I can see why that wouldn't work.

  • Part-time dev (unregistered) in reply to Abso
    Abso:
    Part-time dev:
    With VSS, if she gets to the repository before me then I can't do anything. In other ones, it's extremely easy for either of us to accidentally wipe out the other's work. (Sync A, Sync B, Commit A, Commit B - A's commit just vanished!)

    That's really annoying.

    VSS may be that much of a WTF, but that doesn't mean that every non-distributed source control system is. In this specific situation, even CVS will refuse to commit B's changes until B updates/syncs again.
    Interesting, that's better than I'd been led to believe.

    That's still going to be annoying though, and can easily become very nasty with a large development team. Sync - Commit BONG (A beat you to it) - Sync - Commit BONG (C) - Sync - Commit... Finally! What was I doing again?

    So VSS really is the worst application ever conceived for this... I was starting to think that maybe we were just using it wrong! (More ammo for switching to something better.)
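    For what it's worth, the wipe-out scenario quoted above ("Commit A, Commit B - A's commit just vanished") is refused by most modern systems, not just CVS. A minimal, hedged sketch using git (repo and user names are invented for the demo; the principle of rejecting an out-of-date commit is the same):

```shell
# Two developers clone the same central repo; the second committer's
# push is rejected instead of silently overwriting the first one's work.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q --bare central.git
git clone -q central.git alice 2>/dev/null
git clone -q central.git bob 2>/dev/null

# Alice commits and pushes first
(cd alice \
  && git config user.email alice@example.com && git config user.name Alice \
  && echo "A's work" > file.txt && git add file.txt \
  && git commit -qm "A's change" && git push -q origin HEAD 2>/dev/null)

# Bob, out of date, tries to push his own commit
cd bob
git config user.email bob@example.com && git config user.name Bob
echo "B's work" > file.txt
git add file.txt && git commit -qm "B's change"
# The push is refused until Bob syncs and merges -- A's commit survives
git push -q origin HEAD 2>/dev/null && echo "pushed" || echo "rejected: pull and merge first"
```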

  • (cs) in reply to Alex Papadimoulis
    Alex Papadimoulis:
    Part-time dev:
    If I'm slower than her, when I sync with her to merge in her changes (or a central server) I magically get her fixes - unless we both edit the same method, it Just Works.

    Again, nothing new. This is the whole idea behind Edit/Merge/Commit style of development. Distributed doesn't make merging any easier.

    That's true, except that most (all?) DVCSes have figured out that Edit/Merge/Commit sucks. Much better to Edit/Commit/Merge with anonymous branches where necessary.

    This way, if we happen to work on the same file, we aren't forced to merge prior to committing our fixes. We can solve our problems in isolation, and then worry about combining them. Much simpler.

    CVCS could do this, of course, but generally don't AFAIK.
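    A minimal sketch of the Edit/Commit/Merge style described above, using git (file, branch, and user names are invented for the demo): both sides commit their edits to the same file in isolation, and the merge only happens afterwards.

```shell
# Both branches edit the same file (opposite ends), commit without
# merging first, then merge at the end: non-overlapping edits Just Work.
set -e
cd "$(mktemp -d)"
git init -q work && cd work
git config user.email dev@example.com && git config user.name Dev
printf 'top\n.\n.\n.\n.\n.\nbottom\n' > shared.c
git add shared.c && git commit -qm "base"
base=$(git symbolic-ref --short HEAD)

git checkout -qb mine                      # my fix, committed before any merge
printf 'TOP\n.\n.\n.\n.\n.\nbottom\n' > shared.c
git commit -qam "my fix: top of file"

git checkout -q "$base" && git checkout -qb hers   # her fix, same file, other end
printf 'top\n.\n.\n.\n.\n.\nBOTTOM\n' > shared.c
git commit -qam "her fix: bottom of file"

# The merge comes last, after both commits already exist
git merge -q mine -m "merge my fix"
grep -q TOP shared.c && grep -q BOTTOM shared.c && echo "merged, no conflict"
```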

  • (cs) in reply to Abso
    Abso:
    So what's the "right" way to deal with features that aren't really assigned to a specific release?

    ...

    I guess the obvious fix would be to improve our release-defining, but that's not likely to actually happen.

    Your release process is a little broken, but only semantically. Instead of considering these "in progress" features that will go in a "TBD release", assign them to a specific release from the get-go. You can always move them around as things change.

    There's nothing wrong with using shelves - heck, create one for each feature if you really want - just don't create your "release candidate" builds from them. Merge changes into a release branch and then create a build from that to run through the gauntlet.
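    As a hedged sketch (branch, tag, and feature names are invented), that workflow might look like this in git: finished shelves merge into a release branch, and the labelled build point comes from the release branch, never from a shelf.

```shell
# Per-feature shelves feed a release branch; the RC build is labelled
# and cut from the release branch only.
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email rel@example.com && git config user.name Rel
git commit -q --allow-empty -m "main: last stable"
main=$(git symbolic-ref --short HEAD)

git checkout -qb shelf/feature-123         # a per-feature shelf
echo 'feature 123' > feature.txt
git add feature.txt && git commit -qm "feature 123 done"

git checkout -q "$main"
git checkout -qb release/2.1               # the release branch
git merge -q --no-ff shelf/feature-123 -m "merge feature 123"

# Label the build point immutably; the RC is built from this tag, not a shelf
git tag -a v2.1-rc1 -m "release candidate 1"
git describe --tags
```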

  • (cs) in reply to Part-time dev
    Part-time dev:
    Abso:
    Part-time dev:
    With VSS, if she gets to the repository before me then I can't do anything. In other ones, it's extremely easy for either of us to accidentally wipe out the other's work. (Sync A, Sync B, Commit A, Commit B - A's commit just vanished!)

    That's really annoying.

    VSS may be that much of a WTF, but that doesn't mean that every non-distributed source control system is. In this specific situation, even CVS will refuse to commit B's changes until B updates/syncs again.
    Interesting, that's better than I'd been led to believe.

    That's still going to be annoying though, and can easily become very nasty with a large development team. Sync - Commit BONG (A beat you to it) - Sync - Commit BONG (C) - Sync - Commit... Finally! What was I doing again?

    So VSS really is the worst application ever conceived for this... I was starting to think that maybe we were just using it wrong! (More ammo for switching to something better.)

    Maybe? Our team is pretty small, so it's rarely a problem for me. But if you have three people committing changes to the same file at the same time, someone is going to get stuck doing the merge regardless of what source control you're using.

  • (cs) in reply to boomzilla
    boomzilla:
    That's true, except that most (all?) DVCSes have figured out that Edit/Merge/Commit sucks. Much better to Edit/Commit/Merge with anonymous branches where necessary.

    This way, if we happen to work on the same file, we aren't forced to merge prior to committing our fixes. We can solve our problems in isolation, and then worry about combining them. Much simpler.

    CVCS could do this, of course, but generally don't AFAIK.

    Good point, and one that seems to go back to personal/team preference. I find it awfully silly to say "let's just merge later", but then again I like the Check-out/Edit/Check-in style myself. Not enough to put up a fight, though. I would probably complain about it, however.

  • Marvin the Martian (unregistered)

    As a mathematician sticking around philosophers, I have to say that the choice of the word "dimension" is a very bad one.

    Dimensions should be independent things that together span the whole set of possibilities; phase space, or whatever you want to call it. But here it's a hierarchical ordering... they're levels of precision, in a way. So why not "level" (to plug into the basic concept of low- and high-level languages) or "order"? That's what you're conveying anyway.

    That, or some far-fetched analogy to be worked out (bits & bytes = bones or cells, files & filesystem = flesh or organs, mutations = ...).

  • trtrwtf (unregistered) in reply to Abso
    Abso:
    But if you have three people committing changes to the same file at the same time, someone is going to get stuck doing the merge regardless of what source control you're using.

    Not if you use HAL_VS! It's wonderful! It even writes the code if you ask it nicely...

  • (cs) in reply to Alex Papadimoulis
    Alex Papadimoulis:
    boomzilla:
    Much better to Edit/Commit/Merge with anonymous branches where necessary.
    Good point, and one that seems to go back to personal/team preference. I find it awfully silly to say "let's just merge later", but then again I like the Check-out/Edit/Check-in style myself. Not enough to put up a fight, though. I would probably complain about it, however.
    "Awfully silly" is a lot better than PITA. Also, it makes it easier to make small commits yourself without worrying about having to merge with other changes.

    This isn't simply for the same files, either. It's quite possible that you rely on some behavior in a part of the system that you're not changing but that someone else is. Dealing with that mid stream just makes things more difficult than they need to be.

    It's also more obvious that you're actually merging since you actually use a merge command, as opposed to an update. I can't imagine working under a checkout style regime on anything of substance.

  • (cs) in reply to Part-time dev
    Part-time dev:
    That's still going to be annoying though, and can easily become very nasty with a large development team. Sync - Commit BONG (A beat you to it) - Sync - Commit BONG (C) - Sync - Commit... Finally! What was I doing again?

    This doesn't happen in practice. If you have that many people simultaneously working on the same files, you will have much bigger problems.

    Big teams are compartmentalized by module, so in effect it's just a bunch of small teams integrating modules together. These integration decisions (i.e. cross-over) are best determined ahead of time (and documented!), not at commit-time.

    Part-time dev:
    So VSS really is the worst application ever conceived for this... I was starting to think that maybe we were just using it wrong! (More ammo for switching to something better.)

    VSS is among the best of the worst I'd say. You haven't seen SCM hell until you've worked with configspecs. shudder

  • (cs) in reply to Abso
    Abso:
    Part-time dev:
    Abso:
    Part-time dev:
    With VSS, if she gets to the repository before me then I can't do anything. In other ones, it's extremely easy for either of us to accidentally wipe out the other's work. (Sync A, Sync B, Commit A, Commit B - A's commit just vanished!)

    That's really annoying.

    VSS may be that much of a WTF, but that doesn't mean that every non-distributed source control system is. In this specific situation, even CVS will refuse to commit B's changes until B updates/syncs again.
    Interesting, that's better than I'd been led to believe.

    That's still going to be annoying though, and can easily become very nasty with a large development team. Sync - Commit BONG (A beat you to it) - Sync - Commit BONG (C) - Sync - Commit... Finally! What was I doing again?

    So VSS really is the worst application ever conceived for this... I was starting to think that maybe we were just using it wrong! (More ammo for switching to something better.)

    Maybe? Our team is pretty small, so it's rarely a problem for me. But if you have three people committing changes to the same file at the same time, someone is going to get stuck doing the merge regardless of what source control you're using.

    If you find it's happening a lot, and you're always working on the same file(s), then you might find it pays to do some refactoring.

    OTOH if it's because they always let three people loose at a programming job at once, and you're always fighting with each other over a commit, there's something iffy about the business process.

    I would have thought it rare for more than one person to need to work on the same file at once unless there's something really funny with your system configuration.

  • (cs) in reply to Alex Papadimoulis
    Alex Papadimoulis:
    Abso:
    So what's the "right" way to deal with features that aren't really assigned to a specific release?

    ...

    I guess the obvious fix would be to improve our release-defining, but that's not likely to actually happen.

    Your release process is a little broken, but only semantically. Instead of considering these "in progress" features that will go in a "TBD release", assign them to a specific release from the get-go. You can always move them around as things change.

    There's nothing wrong with using shelves - heck, create one for each feature if you really want - just don't create your "release candidate" builds from them. Merge changes into a release branch and then create a build from that to run through the gauntlet.

    Features are technically assigned to a release from the get-go; it's just that we often move more of them than we leave in the release. Sometimes our manager has difficulty accepting that we can't put every feature ever into the next release while also getting it out the door someday.

    And we do create a branch for each release before the first release candidate is built. It does end up with everything that makes it to the main branch before then, but it's generally only the dev branch that has works in progress. So I guess our system isn't all that far from "right" after all.

    Thanks for the advice!

  • (cs)
    Changes made to a branch are generally merged back in, but one thing must happen: at some point, the branch has to go away. If it doesn’t, it’s just a fork.

    Sometimes (actually most of the time in my experience) the branch represents a feature-frozen version that is being readied for a stable release... no merging there, though many changesets will be propagated "upstream" in the process.

  • Part-time dev (unregistered) in reply to Alex Papadimoulis
    Alex Papadimoulis:
    This has nothing to do with distributed vs centralized: if a source control system has a locking mechanism, then a "Check-out/Edit/Check-in" style of development is possible.
    My point here is that locking is actually a bad thing. We're not talking about records in a DB table that update nearly instantaneously; we're talking about items that need several minutes, if not hours, of work to update.

    With regards to Edit/Merge/Commit:

    I linked to Joel Spolsky as he explained it much better than me. My post was pointing to an effect of the way distributed source control works that I particularly like.

    Distributed version control systems do not think in terms of bits/paths or file versions.

    They work in terms of deltas, and only deltas. Version [GUIDB] is Version [GUIDA] +this, -that.

    This means that you don't Edit/Merge/Commit - you Edit/Commit/Edit/Commit/Edit/Commit then PUSH and/or PULL - and the push/pull is when the merge happens.

    In both DVCS and CVCS there will of course be places where the merge needs a human to sort it out, no matter how clever the system.

    However, in a distributed system, that human merging always happens at the end, not in the middle, so it doesn't break your chain of thought.

    This encourages small commits in DVCS, while the behaviour of CVCS actively discourages them.

    It's generally acknowledged that small, focused commits are better: less work later if you find something has broken.

    Small, focused commits in DVCS mean less work now and less work later, while in CVCS small, focused commits mean more work now.
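    A small git sketch of that last point (repo and file names are invented): several small, focused local commits, with any merging deferred to a single push/pull at the end.

```shell
# Three small local commits, no merging in between; the push (where any
# merge would surface) happens exactly once, at the end.
set -e
cd "$(mktemp -d)"
git init -q --bare central.git
git clone -q central.git dev 2>/dev/null && cd dev
git config user.email pt@example.com && git config user.name PT

# Small, focused commits -- made entirely locally, even offline
for n in 1 2 3; do
  echo "step $n" >> work.txt
  git add work.txt
  git commit -qm "small focused commit $n"
done

# Any merging would happen here, once, at push/pull time -- not per commit
git push -q origin HEAD
git rev-list --count HEAD
```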

    Alex Papadimoulis:
    Well, there's no good reason to fork (label, branch, or shelf) your code "offline"
    Of course there are - the same reasons you'd fork otherwise!

    Life doesn't stop just because somebody nuked the server - those projects still have deadlines!

  • Part-time dev (unregistered) in reply to Matt Westwood
    Matt Westwood:
    If you find it's happening a lot, and you're always working on the same file(s), then you might find it pays to do some refactoring.
    You're probably right about that.

    OTOH, refactoring is of course one of the tasks that makes that event rather likely!

  • (cs) in reply to trtrwtf
    trtrwtf:
    Abso:
    But if you have three people committing changes to the same file at the same time, someone is going to get stuck doing the merge regardless of what source control you're using.

    Not if you use HAL_VS! It's wonderful! It even writes the code if you ask it nicely...

    Yes, but it has a bug where it sometimes deletes vital functions.

  • Mr.'; Drop Database -- (unregistered) in reply to The "Real" WTF
    The "Real" WTF:
    L.:
    Over 9000 WTFP for using MySQL in a project where a database is really required.

    And .. I strongly disagree with putting PHP (5+) on the same level as ASP. I believe we can safely assume WTFP(ASP/vb) = 10*WTFP(php).

    But if we're into language bashing (which IMO is not a serious WTF .. except asp/vb/foxpro/cobol/...), why not go for a good old religious war and state the following:

    if($language.creator=='microsoft'){$language.wtfp=nanotime();}

    I think most languages are not inherently WTFP-worthy.

    Except for Perl.

    It may be the only "real" language whose 99 Bottles code looks somewhat similar to brainf*ck. Oh, and why is my comment spam?? What did I do wrong?

    Spam filters often mistake comments for spam if they contain a link or two. Or maybe the spam filter tries to filter out comments about Perl being bad because it's possible to write obfuscated code in it. There's a lot to dislike about Perl, but let's stick to real reasons, please? :)
    my @arr = (10, 20); # Array variables are prefixed with at signs
    print $arr[0]."\n"; # Except when they're not
    push(@arr, 30); # Some functions can take arrays as parameters
    threeArgFn(@arr); # Others will "unpack" the array's elements into separate arguments
    my %args = ( # A hashmap, but Perl calls it a "hash" for no reason
        searchTerm => $cgi->param("s"),
        category => $cgi->param("cat"),
    ); # If no query-string parameters were passed, %args is now equal to ("searchTerm" => "category").

    This is due to the unpacking behaviour and because the double arrow is treated almost the same as a comma

    Arrays and "hashes" can only contain scalars. Perl provides references, which wrap things up as scalars...

    $args{subCats} = [1, 2, 3]; # Square brackets create an array reference
    push(@{ $args{subCats} }, 4); # But now you must dereference it every time you want to perform a common operation on it
    eval { # "try"
        die("exception"); # "throw"
    };
    if ($@) { # "catch"
    }

    Etc...

  • (cs) in reply to Abso
    Abso:
    So what's the "right" way to deal with features that aren't really assigned to a specific release? At any given time, the project I'm working on has several half-finished features which will be shipped either in the next release (if they're finished in time) or in the one after.

    How we currently handle this is similar to the anti-pattern Alex describes, with "dev" and "main" branches/shelves. But within each of those, testing and such is done on immutable, labelled builds. We seem to avoid the Jenga pattern most of the time by associating commits with bugs and merging all the commits for a given bug at the same time.

    The way it's handled with git: create a new branch for every new feature, and every bug report/ticket number.

    While working on a given branch you can always pull in from other sources, and when it's done, merge it back into whatever release branch you want.

    Git makes this all very easy.
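    A hedged sketch of that branch-per-feature workflow in git (ticket numbers and branch names are invented): the finished fix merges into a release branch, while the unfinished feature branch simply waits.

```shell
# One branch per feature/ticket; only finished work merges into a release.
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email t@example.com && git config user.name T
git commit -q --allow-empty -m base
main=$(git symbolic-ref --short HEAD)

git checkout -qb feature/login             # unfinished feature, stays open
echo 'wip' > login.txt && git add login.txt && git commit -qm "WIP: login"

git checkout -q "$main"
git checkout -qb bugfix/ticket-4711        # small fix, finished quickly
echo 'fix' > fix.txt && git add fix.txt && git commit -qm "fix ticket 4711"

git checkout -q "$main" && git checkout -qb release/1.1
git merge -q --no-ff bugfix/ticket-4711 -m "merge ticket 4711"

# The half-done feature branch just waits for whichever release it lands in
git branch --no-merged
```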

  • Luiz Felipe (unregistered) in reply to The poop... of DOOM!
    The poop... of DOOM!:
    Paratus:
    The poop... of DOOM!:
    The "Real" WTF:
    6000 - using ACCESS as a database
    7000 WTFP for using VB
    7000 WTFP for using PHP

    VB and PHP are certainly RWTFs, but there's no way that they're worse than using Access.

    He said using Access as a database, so you can combine that.

    A PHP application calling an Access database would result in 13000 WTFP (and a developer who's been committed to a mental hospital)

    20000 WTFP for using Firebird/InterBase (it's worse than Access).

    Access is a little DB for simple use; it's not a WTF to use it in the correct situation, but it's easy to abuse. There's nothing wrong with using a simple RDBMS.

    Firebird is crap; Access can sustain more records and users.

  • (cs) in reply to David C.
    David C.:
    The most important thing, IMO, about any VCS is its ability to make merges as painless as possible.

    And as I said in my comment, every DVCS has had to solve this in one way or another in order to be at all viable.

    David C.:
    Without (hopefully) sounding like an advertisement, I've found that the commercial product, Perforce, is the only one that gets this right. The server tracks a file's entire revision history, through all of its permutations of branches (and there may be hundreds, for some key files belonging to large projects.) When you need to do a merge (which they call "integrate"), the system uses the version history to find the common ancestor between your file and the one you're merging in (even if this common ancestor is separated by dozens of intermediate branches.) It then does a 3-way diff on the files (yours, the one you're merging in, and the common ancestor), presenting all conflicts as all three versions of the conflicting lines. Sections where only one source (yours or the merged-in version) differ from the ancestor are automatically merged without any user intervention. (You can, of course, still review the merged changes and fix any mistakes, which still happen occasionally.)

    You've not only described how Git works (and it does this in seconds, most often in under a second), but also how recent builds of SVN with merge-tracking work (except that SVN ended up taking half an hour to finish merging, and that's not counting any manual conflict resolution).

    David C.:
    With this system, you can actually merge any branch into any other branch, not just into direct parent/child branches. The server will track the operations and make the right thing happen, even if the branch/merge history starts looking like a tangled ball of rubber bands.

    A little thought will show that the "tangled ball of rubber bands" is precisely the problem DVCS was invented to solve. Maybe an illustration is in order: Linus' role in Linux these days is essentially merging patches from other people. To do this, he pulls and merges from about a hundred top-level contributors (generally subsystem maintainers), who themselves pull and merge, or apply patches from, people lower down.

    There does have to be a common ancestor for the repository itself, but beyond that, everything else typically just works.

    David C.:
    It's not a perfect system. You still sometimes have to manually merge files, but they manage to automate all of the easy situations, so you only have to manually merge the really nasty changes that no system is likely to be able to handle automatically.

    And for those manual merges -- few and far between though they may be -- I have my Git configured to launch kdiff3, so I get a nice graphical 3-way diff I can edit.

    I can believe you're not astroturfing for Perforce, but if you're going to claim that it's the only one which gets merging right, you really need to try Git. It's free (and open source), requires very little set-up (a server is optional, not required), and there are free books full of documentation out there.
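    For anyone curious, the kdiff3 hookup mentioned above is a one-line configuration (assuming kdiff3 is installed and on the PATH):

```shell
# Tell git to hand merge conflicts to kdiff3 for a graphical 3-way diff
git config --global merge.tool kdiff3

# Later, when a merge stops on conflicts:
#   git mergetool    # opens the 3-way view for each conflicted file
```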

  • (cs) in reply to Part-time dev
    Part-time dev:
    My point here is that locking is actually a bad thing.

    There's nothing wrong with locking. You just don't like it.

    That's fine... and I don't like not locking; it's just a matter of team preference.

    Part-time dev:
    Distributed version control systems do not think in terms of bits/paths or file versions.

    They work in terms of deltas, and only deltas. Version [GUIDB] is Version [GUIDA] +this, -that.

    Yes yes, and time is really just discrete snapshots of the universe as it moves in a direction towards greater entropy. But realistically, we need watches and timezones... and realistically, files are bits/paths and (in revision control) have a history of changes.

    I understand how directed acyclic graphs work and that a file can be multi-headed and have multiple "current" versions. But what does that mean in practice? You can't get the file without resolving the merge... thus it's effectively just a reverse lock.

    Again, nothing new here. Except a lot of confusion for the developers who have a hard enough time grasping the three dimensions of revision control.

    Part-time dev:
    It's generally acknowledged that small focused commits are better - less work later if you find something has broken.

    A commit should represent a reasonable attempt at implementing a specific task. Thus, it's the tasks that should be kept small, not the commits. This is an important distinction -- if tasks are big but commits are kept small, then commits and tasks become further separated.

  • eMBee (unregistered) in reply to annie the moose
    annie the moose:
    You're doing it wrong!

    C:\VersionControl
        MyProg.201109060900.c
        MyProg.201109060904.c
        MyProg.201109060915.c

    It's so easy.

    I liked VMS's built-in versioning: MyProg.c;1 MyProg.c;2 MyProg.c;3

  • Anon (unregistered)

    Trwtf is 3857 words?

  • (cs)

    Backup a directory tree and reload and you get a clone of what you started with. Put that directory tree into version control, and check it out again, and it's nowhere near a clone.

    I'm doing my first Drupal web site, and looked into Subversion and Git. I was horrified at how much Version Control does not track:

    File ownership.

    File access; e.g. which pieces must be writable by others?

    File timestamps; forcing modtime=NOW on checkout is convenient for 'make', but there's no option to preserve the original times.

    Database contents; e.g. MySQL. Try to put a MySQL backup into version control and you get one diff line per table.

    My Linux (case sensitive) system had two files, named "Install.text" and "install.text"; the Subversion repository was on OS X (case insensitive). An svn checkout on OS X confused svn terribly. Not sure whether the repository or the checkout was blown.

    As far as I can see, "Version Control" means "Source Code Version Control", and it is not yet ready for Web 3.0.
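    One possible workaround, sketched here as an assumption rather than an established practice: record the missing metadata (modes, owners, times) in a manifest that is itself version-controlled, and replay it after checkout. This uses GNU stat, and the manifest format and file names are made up for the demo.

```shell
# Record each file's permission mode in a manifest committed alongside
# the tree, then reapply the modes after a fresh checkout.
set -e
cd "$(mktemp -d)"
mkdir -p site/files
echo 'hello' > site/files/page.txt
chmod 664 site/files/page.txt

# Save: "mode path", one line per file (stat -c is GNU coreutils)
( cd site && find . -type f -exec stat -c '%a %n' {} + ) > manifest.txt

# Restore: pretend a checkout reset the mode, then replay the manifest
chmod 644 site/files/page.txt
while read -r mode path; do
  chmod "$mode" "site/$path"
done < manifest.txt

stat -c '%a' site/files/page.txt
```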

  • L. (unregistered) in reply to Luiz Felipe
    Luiz Felipe:
    The poop... of DOOM!:
    Paratus:
    The poop... of DOOM!:
    The "Real" WTF:
    6000 - using ACCESS as a database
    7000 WTFP for using VB
    7000 WTFP for using PHP

    VB and PHP are certainly RWTFs, but there's no way that they're worse than using Access.

    He said using Access as a database, so you can combine that.

    A PHP application calling an Access database would result in 13000 WTFP (and a developer who's been committed to a mental hospital)

    20000 WTFP for using Firebird/InterBase (it's worse than Access).

    Access is a little DB for simple use; it's not a WTF to use it in the correct situation, but it's easy to abuse. There's nothing wrong with using a simple RDBMS.

    Firebird is crap; Access can sustain more records and users.

    Access is total crap; there is no valid reason to use Access instead of MySQL (which is already a simple RDBMS that sucks a lot). I do agree that for very simple and basic DB use one can stick to MySQL or other half-assed DBMSes, but it is also clear that a LOT of these cases are misunderstood.

    I.e. developers who know nothing about SQL think it's only good for storing objects in a table, thus take no advantage of the tool and design an application that uses little or no features, which IS a WTF in itself: using the wrong tools for the job.

    I'm not a DBA, and I'm quite surprised to see how little clue other devs have about SQL in general (yes, all of you who use MySQL can be included in this if you think InnoDB is strictly ACID compliant, for example). In the end, know your tools and use them right, and remember that some tools are USELESS for some projects; there is NO using them right (like Access for anything, or MySQL for complex applications).

    In the end, the only good ones are and will be those who try to do better every single time, spend time reading and learning all they can (and posting their own fails on tdwtf for our enjoyment).

  • The Poop... of DOOM! (unregistered) in reply to snoofle
    snoofle:
    QJo:
    30 hours weekly unpaid overtime for a long time
    We've all done that early on in our careers. Once burned, twice shy. Once you learn to see it coming you can make for the exit long before you're exhausted (who wants to show up at a new job the first day - needing a vacation?)

    It's the fools who keep doing it over and over that cause management to continue this practice of abusing and then discarding employees.

    Been there, done that (showing up at a new job the first day - needing a vacation, due to having been burnt out like that by the previous job). Also had a heavy flu that first week, cause some idiot colleague at the job before found it necessary to come into work while being seriously ill, only to do nothing but moan while hanging over the kitchen sink all day long, every day.

    Another real WTF: Not staying at home when you're too ill to go to work.

  • Gizz (unregistered)

    ...source code control was done on floppy disks. The release code was written to a floppy (5.25") and write protected and put in the fire safe. To comply with BS5750, we also printed the source out on a huge sheaf of paper. As a backup. Happy days.

  • Anonymous Cow-Herd (unregistered) in reply to Part-time dev
    Part-time dev:
    That's still going to be annoying though, and can easily become very nasty with a large development team. Sync - Commit BONG (A beat you to it) - Sync - Commit BONG (C) - Sync - Commit... Finally! What was I doing again?
    Can you even do that? The last time I had to bring a ten-foot pole near a VSS repo, it was a case of "Sorry, you can't edit this file, A has checked it out already." Maybe that was a result of us not realizing that SourceSafe could do it in the more expected way (my excuse was that I'd never used it before; my senior partner had used it for some 10 years, so he doesn't really have one).
  • Ru (unregistered) in reply to AndyCanfield
    AndyCanfield:
    Backup a directory tree and reload and you get a clone of what you started with. Put that directory tree into version control, and check it out again, and it's nowhere near a clone.

    The article and indeed preceding comments mentioned this very fact. Do try to keep up.

    AndyCanfield:
    As far as I can see, "Version Control" means "Source Code Version Control", and it is not yet ready for Web 3.0.

    Web 3.0? Now is that a geometric increase in bullshit, or an arithmetic one? I was under the impression that the Next Big Fad was finally implementing the semantic web.

    What it certainly seems to be heading towards is a complete reimplementation of an operating system using nothing but javascript and HTML. In this situation, I'd expect file metadata to be in its header in some suitable form, and therefore trivially amenable to source control.

  • Ru (unregistered)

    Not even Microsoft use it internally. They haven't for years. They had their own Perforce-based thing for a little while (which was awful), but nowadays they've eaten their own dogfood and moved to TFS.

    Given that there are lots of lovely tools for migrating out of awful old control systems that are so atrocious even their creators prefer never to look at them ever again, TRWTF would presumably be carrying on using it.

  • Part time dev (unregistered) in reply to Alex Papadimoulis
    Alex Papadimoulis:
    Part-time dev:
    Distributed version control systems do not think in terms of bits/paths or file versions.

    They work in terms of deltas, and only deltas. Version [GUIDB] is Version [GUIDA] +this, -that.

    Yes yes, and time is really just discrete snapshots of the universe as it moves in a direction towards greater entropy. But realistically, we need watches and timezones... and realistically, files are bits/paths and (in revision control) have a history of changes.
    Actually, files that are part of an application cannot usefully have independent histories of changes, because they are almost always interdependent.

    The file myobject.h v5 probably won't be compatible with myobject.cpp v5.

    The repository needs to be able to tell you the state of the entire 'fork' at each point in time, so you can pull out myobject.h and myobject.cpp as they both were at a specific point in time.

    The core thing is that you shouldn't think in terms of files; you should think in terms of changes made to the whole 'fork'.

    Of course, this isn't specific to DCVS against CCVS.

    However, it was thinking in terms of 'files' rather than 'fork state' that got VSS and CVS into that mess.
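A toy illustration of "fork state" thinking (hypothetical Python, not any real VCS's internals): a commit records the whole tree, so myobject.h and myobject.cpp always come back as a matched pair:

```python
import hashlib
import json

# Toy model of "commit = snapshot of the whole tree", not per-file versions.
# A commit records every path's content, so checking out a commit always
# yields the header and the source file as they were *together*.
def snapshot(tree):
    """Hash a {path: content} mapping into a commit id."""
    blob = json.dumps(sorted(tree.items())).encode()
    return hashlib.sha1(blob).hexdigest()

history = []  # list of (commit_id, tree) pairs, oldest first

def commit(tree):
    history.append((snapshot(tree), dict(tree)))

commit({"myobject.h": "struct A { int x; };",
        "myobject.cpp": "int get(A a) { return a.x; }"})
commit({"myobject.h": "struct A { int x, y; };",               # field added...
        "myobject.cpp": "int get(A a) { return a.x + a.y; }"})  # ...and used

def checkout(commit_id):
    """Return the full tree at a commit - header and source stay in sync."""
    return next(tree for cid, tree in history if cid == commit_id)

old_id = history[0][0]
assert "y" not in checkout(old_id)["myobject.h"]  # the old, consistent pair
```

Real systems store deltas or content-addressed blobs for efficiency, but the user-visible model is the same: you address a state of the whole tree, never "version 5 of one file".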

    Alex Papadimoulis:
    Part-time dev:
    It's generally acknowledged that small focused commits are better - less work later if you find something has broken.

    A commit should represent a reasonable attempt at implementing a specific task. Thus, it's the tasks that should be kept small, not the commits. This is an important distinction -- if tasks are big but commits are kept small, then commits and tasks become further separated.

    Yes, that is a very good point.

    However, most tasks are divisible - and generally they are easily divisible beyond what a reasonable manager should need to ask a programmer.

    "Fix Bug A" is generally a reasonable request. However, once the programmer gets into the code, they'll probably find that there are several disparate elements that cause the bug that all need to be fixed to properly solve the bug.

    So as "Fix Bug A" is now known to actually be several smaller tasks, the programmer should provide these as separate commits.

    This is something DCVS makes easy - the programmer doesn't need to ask anyone, doesn't risk losing a lock on the necessary files, or need to wait for a lock on another file once they realise it's important, and doesn't need to merge anything (introducing unknown elements) until the 'big' task of "Fix Bug A" is done.

    So DCVS encourages good practice, while CCVS actively discourages it.

  • Part time dev (unregistered) in reply to Ru
    Ru:
    Not even Microsoft use it internally. They haven't for years. They had their own Perforce-based thing for a little while (which was awful), but nowadays they've eaten their own dogfood and moved to TFS.

    Given that there are lots of lovely tools for migrating out of awful old control systems that are so atrocious even their creators prefer never to look at them ever again, TRWTF would presumably be carrying on using it.

    Yes, I'm stuck with it for two projects.

    Rational ClearCase is used for others; that's better, as it at least has atomic commits, but it's not much of an improvement and is rather complex to use.

    Manglement appear to think it would be too difficult to migrate to anything else.

  • Anonymous Cow-Herd (unregistered) in reply to L.
    L.:
    (yes, all of you who use MySQL can be included in this if you think innoDB is strictly ACID compliant for example, etc.)
    I guess you're including MySQL and InnoBASE in this, since they seem to think InnoDB is ACID-compliant. Eight of the page 1 results for "innodb acid compliant" claim that it is, the other two are a bug report where someone claims that it isn't only to find they're wrong (and by "bug report", I mean "rant that ended up in the bug tracker"), and a MySQL vs PostgreSQL comparison which claims it but doesn't substantiate it. So, we could do with an explanation of why it's not the case, and those external anyone-can-edit sources could do with updating with said same.
  • L. (unregistered) in reply to Anonymous Cow-Herd
    Anonymous Cow-Herd:
    L.:
    (yes, all of you who use MySQL can be included in this if you think innoDB is strictly ACID compliant for example, etc.)
    I guess you're including MySQL and InnoBASE in this, since they seem to think InnoDB is ACID-compliant. Eight of the page 1 results for "innodb acid compliant" claim that it is, the other two are a bug report where someone claims that it isn't only to find they're wrong (and by "bug report", I mean "rant that ended up in the bug tracker"), and a MySQL vs PostgreSQL comparison which claims it but doesn't substantiate it. So, we could do with an explanation of why it's not the case, and those external anyone-can-edit sources could do with updating with said same.

    The only ones claiming that MySQL is ACID compliant are MySQL / Oracle themselves.

    ACID: 'C' compliance means any transaction will bring the database from one consistent state to another, both of which of course respect every single rule implemented in the system.

    Due to the way MySQL treats CASCADE, triggers will NOT be fired on cascade operations, which violates the consistency rule by making a cascaded action bypass triggers which inherently contain consistency rules.

    On the same topic, MSSQL's trigger nesting is limited to 32 levels, which implies that if a 33rd trigger should have fired, the database will be left in an inconsistent state, breaking 'C' compliance as well.

    On the exact same topic, PostgreSQL's trigger nesting is NOT limited, and their docs state that developers should be careful not to create infinite trigger loops.

    I don't know Oracle well, but I would expect it to do the same as Postgres, considering how both are extremely focused on SQL standards, consistency and reliability.

    Yes, most people don't care, most people don't notice, and most people don't quite understand what ACID means; they buy the sticker whether it's true or not, and that is why you can read everywhere that InnoDB is fine - written by people who don't use triggers, cascades, or both (at least I hope so... the consequences would be interesting).

    On the same ACID topic, for those who are interested, the 'I' is a very interesting beast ;)
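The consistency point can be shown in miniature with SQLite (invented schema; this does not reproduce the MySQL cascade behaviour described above, which would need MySQL itself): a trigger encodes an invariant, and because the engine aborts any statement that violates it, every transaction moves the data between consistent states.

```python
import sqlite3

# Sketch: a trigger encodes a business rule, and the engine aborts any
# statement that would violate it, so the data can only ever move from
# one consistent state to another.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance NUMERIC NOT NULL);
    CREATE TRIGGER no_overdraft
    BEFORE UPDATE ON accounts
    WHEN NEW.balance < 0
    BEGIN
        SELECT RAISE(ABORT, 'balance may not go negative');
    END;
    INSERT INTO accounts VALUES (1, 100);
""")
try:
    conn.execute("UPDATE accounts SET balance = balance - 500 WHERE id = 1")
except sqlite3.IntegrityError as e:
    print("rejected:", e)

# The invariant held: the row is untouched.
assert conn.execute("SELECT balance FROM accounts").fetchone()[0] == 100
```

The 'C' complaint in the comment above is exactly about engines that let some code path (a cascade) sidestep triggers like this one.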

  • Ru (unregistered) in reply to Anonymous Cow-Herd
    Anonymous Cow-Herd:
    L.:
    (yes, all of you who use MySQL can be included in this if you think innoDB is strictly ACID compliant for example, etc.)
    I guess you're including MySQL and InnoBASE in this, since they seem to think InnoDB is ACID-compliant. Eight of the page 1 results for "innodb acid compliant" claim that it is, the other two are a bug report where someone claims that it isn't only to find they're wrong (and by "bug report", I mean "rant that ended up in the bug tracker"), and a MySQL vs PostgreSQL comparison which claims it but doesn't substantiate it. So, we could do with an explanation of why it's not the case, and those external anyone-can-edit sources could do with updating with said same.
    The Wikipedia page seems to suggest that using a BDB backend makes MySQL ACID capable. First I've ever heard of that, though.

    Anyhoo, you could listen to http://nosql.mypopescu.com/post/1085685966/mysql-is-not-acid-compliant, if you're bored. Might be a bit outdated nowadays. Some of you may find it familiar...

  • Nagesh (unregistered) in reply to letseatlunch
    letseatlunch:
    is it just me or is any one else feel that they must be in the twilight zone because this was posted before 8:30?
    8:30 is when tdwtf artical is tipicaly publish here in Hyderabad. In U.S. time this is being more aproximate 3:00pm?
  • QJo (unregistered) in reply to The Poop... of DOOM!
    The Poop... of DOOM!:
    snoofle:
    QJo:
    30 hours weekly unpaid overtime for a long time
    We've all done that early on in our careers. Once burned, twice shy. Once you learn to see it coming you can make for the exit long before you're exhausted (who wants to show up at a new job the first day - needing a vacation?)

    It's the fools who keep doing it over and over that cause management to continue this practice of abusing and then discarding employees.

    Been there, done that (showing up at a new job the first day - needing a vacation, due to having been burnt out like that by the previous job). Also had a heavy flu that first week, cause some idiot colleague at the job before found it necessary to come into work while being seriously ill, only to do nothing but moan while hanging over the kitchen sink all day long, every day.

    Another real WTF: Not staying at home when you're too ill to go to work.

    Especially nowadays when there exists (a) wireless internet and (b) neat little tables which can extend over a sickbed that will easily accommodate a laptop.

    Send the email, tell them "WFB".

  • QJo (unregistered) in reply to Gizz
    Gizz:
    ...source code control was done on floppy disks. The release code was written to a floppy (5.25") and write protected and put in the fire safe. To comply with BS5750, we also printed the source out on a huge sheaf of paper. As a backup. Happy days.

    IIRC, "write protect" on those discs worked via a notch punched out of the cardboard envelope: you write-protected a disc by sticking a piece of opaque tape (insulating tape, gaffa tape, whatever) over the notch, and peeled the tape off to make the disc writable again.

  • The Daily What The Comment (unregistered) in reply to Nagesh
    Nagesh:
    letseatlunch:
    is it just me or is any one else feel that they must be in the twilight zone because this was posted before 8:30?
    8:30 is when tdwtf artical is tipicaly publish here in Hyderabad. In U.S. time this is being more aproximate 3:00pm?
    Hey, Nagesh, do you ever wonder what all those red squiggly lines under your comments mean?

    CAPTCHA: incassum -- incassum you didn't know, it means you are barely literate.

  • (cs)

    The most important thing that distributed version control systems (DVCS) bring is a third workflow (besides the Check-Out (Lock) - Edit - Check-in (Unlock) and Edit - Merge (Update) - Commit workflows mentioned in the article):

    • Edit - Commit - Merge

    See e.g. "Understanding Version Control" by Eric S. Raymond, or "Version Control by Example" by Eric Sink.
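A toy model of that Edit - Commit - Merge workflow (hypothetical Python, not any real VCS's API): both developers commit locally first, without coordinating, and a later merge commit records both histories as its parents:

```python
# Toy commit DAG for the Edit - Commit - Merge workflow: nobody waits
# for a lock or a sync; the merge happens *after* both commits exist.
commits = {}  # commit id -> set of parent ids

def commit(cid, parents=()):
    commits[cid] = set(parents)

commit("base")
commit("alice1", ["base"])           # Alice edits and commits immediately
commit("bob1", ["base"])             # Bob does the same, concurrently
commit("merge", ["alice1", "bob1"])  # merge commit remembers both parents

def ancestors(cid):
    """All commits reachable from cid, including itself."""
    seen, todo = set(), [cid]
    while todo:
        c = todo.pop()
        if c not in seen:
            seen.add(c)
            todo.extend(commits[c])
    return seen

# Both lines of work are preserved in the merged history, and the shared
# history gives the common ancestor a 3-way merge would diff against.
assert {"alice1", "bob1", "base"} <= ancestors("merge")
assert "base" in ancestors("alice1") & ancestors("bob1")
```

This is essentially why a merge commit having multiple parents is the key primitive: the workflow falls out of the data structure.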

  • (cs) in reply to The Daily What The Comment
    The Daily What The Comment:
    Nagesh:
    letseatlunch:
    is it just me or is any one else feel that they must be in the twilight zone because this was posted before 8:30?
    8:30 is when tdwtf artical is tipicaly publish here in Hyderabad. In U.S. time this is being more aproximate 3:00pm?
    Hey, Nagesh, do you ever wonder what all those red squiggly lines under your comments means?

    CAPTCHA: incassum -- incassum you didn't know, it means you are barely literate.

    I believe it's a way Nagesh uses to measure the trolliness of his comments.

    There's a certain magical total length of red squigglies that, when achieved, captures the most flames. Too short only snags the hardcore spelling snobs, and too long only gets a few morans.

  • Jupiter (unregistered) in reply to L.
    L.:
    Anonymous Cow-Herd:
    L.:
    (yes, all of you who use MySQL can be included in this if you think innoDB is strictly ACID compliant for example, etc.)
    I guess you're including MySQL and InnoBASE in this, since they seem to think InnoDB is ACID-compliant. Eight of the page 1 results for "innodb acid compliant" claim that it is, the other two are a bug report where someone claims that it isn't only to find they're wrong (and by "bug report", I mean "rant that ended up in the bug tracker"), and a MySQL vs PostgreSQL comparison which claims it but doesn't substantiate it. So, we could do with an explanation of why it's not the case, and those external anyone-can-edit sources could do with updating with said same.

    The only ones claiming that MySQL is ACID compliant are MySQL / Oracle themselves.

    ACID: 'C' compliance means any transaction will bring the database from one consistent state to another, both of which of course respect every single rule implemented in the system.

    Due to the way MySQL treats CASCADE, triggers will NOT be fired on cascade operations, which violates the consistency rule by making a cascaded action bypass triggers which inherently contain consistency rules.

    On the same topic, MSSQL's trigger nesting is limited to 32 levels, which implies that if a 33rd trigger should have fired, the database will be left in an inconsistent state, breaking 'C' compliance as well.

    On the exact same topic, PostgreSQL's trigger nesting is NOT limited, and their docs state that developers should be careful not to create infinite trigger loops.

    I don't know Oracle well, but I would expect it to do the same as Postgres, considering how both are extremely focused on SQL standards, consistency and reliability.

    Yes, most people don't care, most people don't notice, and most people don't quite understand what ACID means; they buy the sticker whether it's true or not, and that is why you can read everywhere that InnoDB is fine - written by people who don't use triggers, cascades, or both (at least I hope so... the consequences would be interesting).

    On the same ACID topic, for those who are interested, the 'I' is a very interesting beast ;)

    you must be fun at dinner parties...

  • (cs) in reply to David C.
    David C.:
    The most important thing, IMO, about any VCS is its ability to make merges as painless as possible.
    I wholeheartedly agree. It is merging that has to be easy, not only branching.
    David C.:
    Many offer very little. [...]

    Unfortunately, most systems are very bad at this, and every free system I've used is included.

    It looks like you haven't used any of the modern free (open source) version control systems: Git, Mercurial, Bazaar, etc.
    David C.:
    Without (hopefully) sounding like an advertisement, I've found that the commercial product, Perforce, is the only one that gets this right. The server tracks a file's entire revision history, through all of its permutations of branches (and there may be hundreds, for some key files belonging to large projects.) When you need to do a merge (which they call "integrate"), the system uses the version history to find the common ancestor between your file and the one you're merging in (even if this common ancestor is separated by dozens of intermediate branches.) It then does a 3-way diff on the files (yours, the one you're merging in, and the common ancestor), presenting all conflicts as all three versions of the conflicting lines. Sections where only one source (yours or the merged-in version) differ from the ancestor are automatically merged without any user intervention. (You can, of course, still review the merged changes and fix any mistakes, which still happen occasionally.)
    That is exactly what e.g. Git does (as does any version control system that implements merge tracking), though the common ancestor is a version of the whole project, not of an individual file.

    I don't know how (or whether) Perforce does it, but with the default "recursive" merge strategy that Git uses, it can deal with cases where there are multiple common ancestors, as in a so-called criss-cross merge.

    David C.:
    With this system (Perforce), you can actually merge any branch into any other branch, not just into direct parent/child branches. The server will track the operations and make the right thing happen, even if the branch/merge history starts looking like a tangled ball of rubber bands.

    Same with Git. And all it requires is for merge commits to remember all their parents...

    Nb. with Git you can even merge unrelated branches, e.g. incorporating code that was developed independently and is now part of a larger project.
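The 3-way merge described in this exchange can be sketched in a few lines. This is deliberately simplified (real tools align hunks with a diff algorithm; this sketch assumes a line-for-line correspondence between the three versions), but the decision rule is the same one Perforce and Git apply:

```python
# Minimal 3-way merge: for each line, if only one side changed relative
# to the common ancestor, take that side automatically; if both sides
# changed it differently, flag a conflict for the user.
def merge3(ancestor, ours, theirs):
    merged = []
    for a, o, t in zip(ancestor, ours, theirs):
        if o == t or t == a:      # both agree, or only we changed it
            merged.append(o)
        elif o == a:              # only they changed it
            merged.append(t)
        else:                     # both changed it differently: conflict
            merged.append(f"<<< {o} ||| {a} >>> {t}")
    return merged

base   = ["int x;", "int y;", "int z;"]
ours   = ["int x;", "long y;", "int z;"]    # we changed line 2
theirs = ["int x;", "int y;", "double z;"]  # they changed line 3
print(merge3(base, ours, theirs))
# → ['int x;', 'long y;', 'double z;'] - both edits merged, no conflict
```

Note how everything hinges on knowing the right ancestor, which is exactly why the merge-tracking discussed above matters: pick a bad ancestor and every difference looks like a conflict.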

  • L. (unregistered) in reply to Ru

    Unfortunately the Wikipedia page is full of shit - unsupported pro-MySQL crap (like "Google uses MySQL" instead of "some very minor Google apps use MySQL", etc.) - and I've had the displeasure of witnessing the bias around it. On the other hand... Wikipedia is MySQL-based (and I doubt they even have a single DBA on their dev team, given the number of dead links to paragraphs long deleted ;) ).

    Never tried BDB or the other engine they say should be ACID... but if it's as ACID as InnoDB (which the page used to claim was ACID until I modified it), ... meh.

    Nobody talks about it because the DBAs who see this as a problem are used to better and stricter RDBMSs (Oracle, PostgreSQL, ...) and would at best use MySQL as a "cheap" solution, if at all.

  • QJo (unregistered) in reply to L.
    L.:
    Unfortunately the Wikipedia page is full of shit - unsupported pro-MySQL crap (like "Google uses MySQL" instead of "some very minor Google apps use MySQL", etc.) - and I've had the displeasure of witnessing the bias around it. On the other hand... Wikipedia is MySQL-based (and I doubt they even have a single DBA on their dev team, given the number of dead links to paragraphs long deleted ;) ).

    Never tried BDB or the other engine they say should be ACID... but if it's as ACID as InnoDB (which the page used to claim was ACID until I modified it), ... meh.

    Nobody talks about it because the DBAs who see this as a problem are used to better and stricter RDBMSs (Oracle, PostgreSQL, ...) and would at best use MySQL as a "cheap" solution, if at all.

    So, when will we be able to see your replacement Wikipedia article on this subject?

    Understanding its limitations, I find Wikipedia a huge asset. I believe it's a Good Thing to correct inaccuracies and mistakes as soon as you see them, although arguments over matters of opinion based on personal preference are probably best kept away from, as we all have far too much of that sort of thing to do here instead.

Leave a comment on “Source Control Done Right”
