The important part is that you have an alternate copy of the data, preferably offsite. It's also preferable that you can restore versions of the data far enough back in time that you notice any possible corruption before the last good copy is overwritten by a more recent (corrupt) backup.
And even very nifty file systems like ZFS are not completely impervious to certain kinds of hardware issues.
"Shortly after, they'd copied over the files from master, pasted them in the working folder, and force pushed with a fake name, erasing all history on said branch in a terrible attempt to cover their tracks.
Remember how Clarissa had wiped her MacBook the night before? If Toni hadn't cloned the entire repository to her ThinkPad for Thanksgiving tinkering, Clarissa would've lost four to eight months of work."
Umm, you can limit privileges in Git. Why can anyone tinker with master? That is the real WTF. This should not have occurred if the most trivial permissions had been configured correctly. Where I work, only our CM team can merge into master or do anything with it, really. Maybe 4 or 5 people out of the 50 who work in the repo.
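Git itself has no user-level ACLs, so branch protection has to live on the server side, in the hosting platform or in the shared bare repository's own settings. A minimal sketch of the latter, using stock Git config options:

```shell
# Run inside the shared bare repository. These are standard Git settings:
# denyNonFastForwards rejects all force pushes (rewritten history),
# denyDeletes rejects branch deletion. Per-user or per-branch rules need
# a pre-receive hook or the hosting platform's protected branches on top.
git config receive.denyNonFastForwards true
git config receive.denyDeletes true
```

With `denyNonFastForwards` set, the force push in the story would have been rejected outright; restricting *who* may touch master still needs tooling layered on top of Git.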
This whole article is TRWTF -- local branches for local development, no need to push every little change every time. And how TF were these people working WITHOUT having a local clone of at least the dev branch on their machines? Add in the ability to undo those arbitrary changes via the reflog, and there was never a REAL problem with the code in source control, just with its users.
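A sketch of the reflog recovery being alluded to, assuming the damage happened in a repository that saw the commits (the reflog is local and per-repository, so it only helps on such a machine):

```shell
# The reflog records every position a branch tip (and HEAD) has pointed
# to, even after resets or force pushes, until git gc expires the entries.
git reflog                    # locate the last good state, e.g. HEAD@{1}
git branch rescue HEAD@{1}    # pin the lost commit to a new branch
# ...or move the current branch straight back:
# git reset --hard HEAD@{1}
```

Once the lost commit is on a branch, it can simply be pushed back to the remote to undo the damage there.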
When the story reached the "force pushed to master" bit, I immediately wanted to know why that was even possible. Score a point for part of management being competent, lose a point for whoever set up the repos/deploy process in the first place not being worth their salt.
We have a repo in our shop that only one system user is allowed to push to. Once or twice a year the permissions have to be reset because some "helpful" manager decided their team needed push access to "everything" and a user has pushed something to the repo.
The real problem with Mercurial isn't rewriting history. Merging trunk into your branch before committing to the trunk is basically the same thing as a rebase, and results in fast-forward merges. The thing I like better about Git is the way it handles branches. They feel like a much more permanent structure in Mercurial. In Git, you can just create a bunch and throw them away. It feels much more like the way I actually use branches.
Mercurial is entirely capable of rewriting history. It just makes it very explicit - hg rebase, hg strip, etc. You don't get history rewriting by accident. Another big difference between Git and Mercurial in this respect is that in Git, if a changeset is no longer reachable from a branch or other ref, at some point it may be garbage-collected automatically. In Mercurial, changesets are never deleted unless explicitly deleted.
Mercurial also has a much more advanced extension - changeset evolution - which in theory should avoid the major issue with history rewriting (people having out-of-date or duplicate changesets in their repositories). However, it's a hard problem which has been worked on for a long time, and I'm personally still not confident in using it.
Git branches are almost identical to Mercurial bookmarks. Mercurial also has additional branching options, including anonymous branching and named branches (I'm personally a fan of named branches, but many people aren't).
CS degrees are all but useless for actually working in the field, so I'd hope that fewer men and women alike are getting them. Software Engineering and Information Technology are the degrees that universities (and trade schools) should have, but don't, because computer science sounds so much more prestigious.
My CS degree taught me to create programming languages, write compilers and OSes, and design standard libraries, none of which I actually ever did (except on a private whim). They basically instill in grads the idea that they have to reinvent the wheel and create inner systems, because using existing libraries is utterly anathema to the CS curriculum. CS degrees are TDWTF fillers.
In all but the top universities, CS should have been replaced with SE more than 20 years ago, but it keeps trundling on, another unkillable dinosaur of academia.
A CS degree is not vocational training, true. But not knowing about compiler construction, you might think "hey, I need something quick and simple for my web pages, how hard can it be?" and end up inventing PHP.
Some groups really prefer to keep things out in the open, at least on the group level.
Yes. Public commits palliate the "guy in the room syndrome" identified by Jim McCarthy. (http://apcmag.com/foobar_blog.htm/)
That problem was identified in a time when most developers likely worked at the same site, but now we've got DVCSes leading people to believe that forking the repo and going off on uncontrolled tangents is a good thing.
I'm working on a project right now, in fact, where one of the key difficulties we overcame is that the bulk of the code base was just blindly cloned from an open source project then hacked directly on, with no attention given to the possibility of merging the upstream trunk again at some point. Now we're 15 months down the line and it took several people a few days of our recent winter vacation to do the detective work and grunt coding to merge the upstream changes over those 15 months back into our fork. And that work is going to recur to a lesser extent until we can extract our local changes from the base in such a way that a simple diff-and-patch is likely to work without hand-editing.
If, by contrast, this project I'm working on had been forced to work within the upstream project, it would have been architected differently, so that there never would have been any difficulty merging in trunk changes as needed.
I am a Fossil fan; like Mercurial, it actively tries to prevent history modification. From what I can tell, Fossil is even stricter than Mercurial. It's basically a DVCS that operates like Subversion day-to-day. By default, it keeps all your remote developers on track by sending all commits back to the repo they cloned from. Thus, no guy-in-the-room.
Developers can still hide what they're doing if they work at it hard enough, but when the default is to share what you're working on, it makes hiding difficult.
TRWTF is doing a full hard drive wipe to reinstall OS X. Even with no internal optical drive, you can install from an external optical drive, so that isn't a valid excuse.
Maybe if the filesystem was sufficiently hosed (the HFS equivalent of fsck is based on Disk First Aid from ~25 years ago, which is notoriously stupid), in which case someone should invest in a copy of DiskWarrior. But you can still boot from an external drive or install disc, get to a shell prompt, and cp -r everything to an external drive.
Knowing how to create a compiler is incredibly valuable for real-world work, even if you never have to create a compiler, because it teaches you about parsing, about trees and visitors, and about code generation, all of which are things that come up over and over again in real-world problems.
A CS grad is more likely to try to roll their own rather than using one of many existing frameworks/platforms, because they think they know how to make a compiler.
And in my experience, 99% of the CS grads I have worked with would create a mess that would make PHP look incredibly good.
CS teaches theory and the expansion of theories, not the practical application of that theory, and that is a big difference.
At my last job, I was the only man on a team of 5. Also, our manager was a woman (and the best programmer of all of us). Really comfortable team to work on.
("What do you mean, 'last job?' Why aren't you still there?")
Good question. Basically, because the director over our manager was a liar. But that's a whole other WTF... I would note that 4 out of the 6 of us have moved on. The two remaining will soon, I suppose...
I'm currently a Computer Science student in Australia. My CS degree has taught me none of those things; the lowest-level practical application I have encountered is the use of Unix system calls. That was in an operating systems unit where students could elect to write a shell or some sort of client-server application (HTTP/FTP/develop your own protocol).
Apart from the academic stuff (research/science methodology) the majority of my degree has focused on design, algorithms and data-structures.
One of my early units, Foundations of Programming, taught basic C, top-down design, algorithms and testing with a focus on low coupling/high cohesion/module re-usability. That lecturer also strongly enforced good coding style, proper variable name selection and informative code commenting. It was one of the best units I've taken.
We've never been told to throw away the standard libraries (they're fast and well tested), just to encapsulate the STL structures because they come with all the baggage of design by committee. I was required to implement my own Vector and Binary Search Tree, although that was simply to understand how they worked.
I certainly don't feel the content of my degree is useless. At my university it is the Games Tech (double major with CS) students that are in demand in industry because they make shit-hot programmers.
Git users are unusually obsessed with rewriting history; apparently a clean log is more important than recording what actually happened. Fuck-ups should be kept as evidence and learning material. Accidental private key commits aside, of course.
Anyway, who wipes their computer for a reinstall without taking a backup first?
Then your degree is misnamed. Computer Science and Software Development are different things.
Computer Science is a largely academic discipline; Edsger Dijkstra is credited with saying, "Computer Science is no more about computers than astronomy is about telescopes." Much of proper Computer Science is a branch of academic mathematics, yet you will not find too many practicing software developers frequently writing software documentation in TeX purely so they can typeset the mathematics necessary to properly describe their design.
Incidentally, I also take issue with the term "Software Engineering." Engineering is applied physics; software development is rarely applied physics. I don't care if you call it Software Development or Computer Programming or Hacking or anything else, but I will resist if you try to pretend that what we do is "engineering." This isn't just pedantry: this very site is proof enough that our approach to robustness is vastly different from that of what we used to call the engineering disciplines before that term started to become diluted. A proper engineer can produce a physical equation proving the design correct; a "software engineer" rarely can, irrespective of skill and training, because software is simply too far removed from the underlying physics governing the computer's operation.
This is not an excuse to not be an adult, or an invitation to do the right things wrong, or to do the wrong things right. It is simply an abandonment of pretense.
Engineering is applied physics; software development is rarely applied physics
Yeah, look, some actual engineers read these forums too, you know. The discussion is a lot more complex than you can fit in a sentence, not least because most engineers end up writing software at least part of the time. They just call it "a spreadsheet" or "this R script" or something other than "a program". Is a program that does an engineering task "applied physics"? How about if the task it does isn't physics, or even well-defined, and is definitely not tractable or calculable?
Engineering, like software development, is as much art as science. So is much science.