• my name (unregistered)

    Is anyone else seeing the internal server errors?

  • Matt (unregistered)

    This would be a good time to use digital forensics tools to examine his HDD and recover those files, right?

  • not a robot (unregistered)

    I was half expecting the backup directory to be NFS/CIFS-mounted with world-writable permissions.

  • Zach (unregistered)

    That is everyone's fear: that they'll accidentally run rm -rf /*/ or something like that.

  • snoofle (unregistered) in reply to Zach

    I HAVE done exactly that (uninitialized variable at the top of the path, leading to wiped-out file systems), but never without backups.

    One good way to prevent this sort of thing is to create a template of the target directory in a safe place with a temporary prefix:

    .../MyDir/Temp/${TheDir}/.

    and then test. This way, you catch those nasty little "rm <null>/*" type errors without doing any damage. Then once it's debugged, you remove the safety prefix.
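
    A minimal sketch of that idea, with made-up names (SAFE_PREFIX and TheDir here are illustrative, not from any real script):

    #!/bin/sh
    # Safety prefix: while debugging, every destructive path is rooted
    # under a scratch template of the real tree.
    SAFE_PREFIX="$HOME/MyDir/Temp"   # remove once the script is debugged
    TheDir="$1"

    # If $TheDir arrives empty, this expands to "$HOME/MyDir/Temp//*"
    # and mangles only the scratch copy, never the real tree.
    rm -rf "$SAFE_PREFIX/$TheDir"/*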

  • (nodebb)

    Sounds like Milton's been there a long time. And it sounds like there were 1,000 signs that he is an idiot that were happily ignored.

    It also sounds like Milton doesn't understand that you need to write scripts for people that know less than you. Oh, wait..... there's nobody on that list!

  • sunnyboy (unregistered)

    This is a nice WTF entry. Milton basically WTF'd himself out of existence. Yay.

  • misterdoubt (unregistered)

    The support team took the stance that if you were technical enough to be running Linux and writing shell scripts, you were technical enough to set up your own backup solution.

    In other words... the support team couldn't be arsed.

    Imagine how much Milton -- the one who "knew the processing pipeline better than anyone" -- could have contributed if the company supported collaboration and encouraged good practices...

  • Dave (unregistered) in reply to misterdoubt

    And tested backups. It's not actually a backup until you've tested it and seen that you can restore from it. I once had a manager who wanted us to create a Windows logon script that would randomly, about 1% of the time, format the disk and restore from last night's backup. The only reason we didn't implement it was the time it took to copy everything back across. Drastic, but it would have worked.

  • Andrew (unregistered) in reply to my name

    Yes. I thought someone might have run rm -rf /*/ on TDWTF.com.

  • Code Refactorer (unregistered)

    I have done it, too. A long time ago I wrote a lot of MS-DOS scripts. So when I switched to Unix, I wrote "rm -rf *.*". It did not just erase all files in the current directory as intended; it also went up the tree by matching "..", the parent directory. And it's not only me: I was able to jump in just in time when two other colleagues tried to do the same a few years ago.

  • Linux Survivor (unregistered) in reply to misterdoubt

    It does seem rather passive-aggressive, doesn't it? "You Linuxed yourself into this mess, you just Linux yourself out of it."

    Probably a constrained support team where the Linux server guys didn't condescend to desktop support and the desktop support was all Microsoft gum-chewing weenies.

    The real WTF is ol' Milton surviving so long playing Russian Roulette with his private scripting fiefdom. But I guess karma isn't always instant. It's reassuring that stupidity still hurts, eventually.

  • ooOOooGa (unregistered)

    I did something similar once. I was modifying some arcane PHP scripts that I had inherited. As a result of someone using 'require_once' where 'include' would have been better, a variable didn't get initialized. So it ran 'chmod -R 770 /' on a cloud server. As root, no less.

    Now chmod 770 on every file and directory doesn't actually destroy data. It doesn't even really prevent the computer from doing most things.

    However, it does cause both ssh and sudo to flip you the bird since their configuration files are too permissive. Which makes the problem really hard to recover from since I no longer had any convenient or powerful access to the cloud server.

  • ooOOooGa (unregistered) in reply to ooOOooGa

    Oh, I should also mention that it was an Ubuntu server. So while the hosting provider had virtual KVM access available, I could only log in directly as an unprivileged user. Ubuntu expects and requires system admins to use sudo for privileged tasks.

  • (nodebb)

    Re: Learning...

    The intensity of the lesson learned is directly proportional to the "cost" of the mistake you made. If it were a "small thing" with no "cost", the lesson wouldn't be learned except by multiple dope slaps (possibly self-inflicted). In this case, the "cost" (lots of irreplaceable files) was quite high, and the lesson was learned quickly and firmly. No "dope slaps" necessary.

    I suspect that all of us have had a large range of errors that we learned from. Life goes on.

  • X (unregistered)

    set -e anyone?

  • (nodebb)

    Scripting languages are like the Florida Man of programming.

  • masonwheeler (github)

    While all the Windows boxes were running an automated backup tool, installed automatically, none of the Linux boxes were so configured.

    It's also an ecosystem that has had solid undelete tools available pretty much forever. (Heck, the first time I ever used one was back in the DOS days! And it worked!) Is this another "we can't do that because it's newer than the 1970s" thing, or...?

  • sizer99 (google)

    Two more WTFs. First: always, always quote the variable. If it's got a space in it and you quote, you get 'rm -rf "foo / bar"'. If you don't quote, you get 'rm -rf foo / bar', which means 'rm -rf foo', 'rm -rf /', AND 'rm -rf bar'. Something unexpected always gets into your variable. This exact mistake is how an iTunes update once ended up erasing people's hard drives.

    Second: if you ever find yourself writing 'rm -rf "$foo"/*', use 'rm -rf "$foo"; mkdir "$foo"' instead. If $foo ends up blank, the rm -rf does nothing and the mkdir errors out. Note: the latter does wipe out dotfiles ('.foo') that the former spares, but that's usually a feature.

    Finally, be paranoid and put in a check for a blank/whitespace $foo that dies before the rm -rf anyhow, as in the sketch below.

    I have never lost a filesystem this way, at least.
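
    A sketch pulling those guards together ($foo and the check are hypothetical, not from the iTunes or Steam scripts):

    #!/bin/sh
    foo="$1"

    # Guard: refuse to run if $foo is empty or all whitespace.
    case "$foo" in
      *[![:space:]]*) ;;                        # has real content: proceed
      *) echo "refusing to rm: \$foo is blank" >&2; exit 1 ;;
    esac

    # Quoted, and "remove then recreate" instead of rm -rf "$foo"/*:
    # if $foo is bogus, rm fails loudly instead of becoming rm -rf /.
    rm -rf "$foo"
    mkdir -p "$foo"

    Bash's "${foo:?no target}" expansion is a terser variant of the blank check, though it only catches unset or empty values, not whitespace-only ones.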

  • Aren (unregistered)

    This isn't limited to the world of *nix, either. Back in the days of MS-DOS, I had made my own system for moving deleted files into a primitive recycling bin directory. DEL was redirected to this task. All was well and good. The way to empty the bin was to DELTREE TRASH. Well, it was late at night and a few neurons cross-fired, and before I knew what I had done, I had wiped out my main directory with a badly coded batch file command to auto-purge and remake the TRASH directory.

    Oops-proofing is only possible with off-system backups that you can (and know how to) restore from.

    Of course, I still need to buy said solution and implement it. If I accidentally rm -rf * my laptop, I would be kicking myself all the way to my next incarnation. It sucks being poor. :(

  • Rod (unregistered)

    I seem to recall news about a bug in the Linux version of Steam that did pretty much exactly this.

  • (nodebb) in reply to Code Refactorer

    I don't have Linux access right now, but I am not sure whether *.* would match the parent "..".

  • (nodebb) in reply to DrOptableUser

    Ubuntu 18.04: echo *.* does not match .. in zsh, bash, or sh.

  • cryin' man (unregistered)

    Real men don't take backups; they cry when it goes awry...

  • Jinks (unregistered) in reply to Code Refactorer

    "It did not just erase all files in the current directory as intended, but also went up by matching "..", the parent directory. But it's not only me. I could jump in the right time when two other colleagues tried to do the same a few years ago."

    Seems like an appropriate time to repost "The Hole Hawg":

    http://www.team.net/mjb/hawg.html

  • Simon (unregistered)

    I got caught out a few years ago by a shell script (not mine) that changed into a particular directory, and removed everything in it. Unfortunately, the error handling was a little lacking, and when the 'cd' command failed due to the directory not existing, it went ahead with the 'rm -rf *' anyway... wiping my home directory. Hurray for backups...

    (The script had been written some years earlier, and assumed a directory layout that had since changed. Following this incident, it was changed to a single-step find-and-remove command instead.)
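
    That single-step command might have looked something like this (the path is invented):

    # If the directory is missing, find fails and deletes nothing;
    # there is no separate 'cd' step to fall out of sync with reality.
    find /var/spool/myapp/work -mindepth 1 -delete

    Unlike 'rm -rf *', this also removes dotfiles, which is usually what an "empty this directory" script wants anyway.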

  • Code Refactorer (unregistered) in reply to jimbo1qaz 0

    I used ksh (Korn shell) on either an HP-UX or an IBM AIX system in the nineties, and I am sure *.* matched the home directory. I remember it well because it unexpectedly also matched all the hidden files in the current directory (that is, all files starting with a dot). Restoring them was a nightmare!

  • (nodebb) in reply to Dave

    Deliberately causing damage, in order to assess your disaster recovery strategy? Actually a surprisingly good idea. Netflix does this now with Chaos Monkey. https://en.wikipedia.org/wiki/Chaos_engineering

  • ScienceGoneBad (unregistered)

    I screwed myself once with an unseen space in 'rm -rf .<space>/'. I caught it while there was still enough of /bin and /usr/bin left to recover from a mirrored machine, all while the system kept running (sweating bullets to keep from rebooting until I got everything back).

    The most spectacular was when we were forced to bring a mirror system online as the primary during a management-caused emergency. Both machines had daily tape backups, and there was a nightly mirror script I had written to make sure that both data loads were identical. In the course of a LONG 14+ hour day getting the mirror system configured properly as the primary and making sure the data was right, I forgot to set the ONE flag that controlled the mirroring direction in my scripts... It mirrored an empty system onto the new primary. The tape restore took several hours and many cases of antacids on my part. Nothing like a bit of panic to keep you going.

  • I dunno LOL ¯\(°_o)/¯ (unregistered)

    I just tried it on bash 3.2.x: "echo *.*" does not show dot, dot-dot, or any dotfiles, just regular files/directories with a dot inside, and "echo *" finds every one of those, even without a dot.

    But "echo .*" shows dot, dot-dot, and all the dotfiles, and "echo .?" shows only dot-dot (still enough for recursion to ruin your day). I'm sure the universe can find a way to make something bad happen anyhow. For instance, "rm -rf ${name}.*" or "rm -rf ${name}*" (trying to remove all the various extensions of a base name) could do it.

    Not properly quoting to handle blanks in a path is still a much better way to scorch the earth, especially if you have a directory name that ends in a blank.
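
    For what it's worth, the classic idiom for catching dotfiles without ever matching "." or ".." is a pair of globs (this works in bash and other POSIX shells):

    # .[!.]*  needs a second character that is not a dot, so it skips "." and ".."
    # ..?*    needs at least one character after "..", so it catches "..foo"
    rm -rf .[!.]* ..?*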

  • A confection of sugar (unregistered) in reply to Linux Survivor

    I think that's quite a common issue in organisations (scientific research institutions, for example) where the IT group is somewhat divided between the Linux guys running the clusters and such that do the science, and the enterprise guys who look after the service desk, email, Windows desktops, Oracle databases, and whatnot. The two camps don't understand, or care about, each other, and this sort of attitude problem can be the result.

    I made the rm -rf .* mistake in my first week of using Linux in 1992. At least it was my own machine, and no great harm was done. But I learned. I have been extremely wary of any form of automated deletion in shell scripts ever since, especially if the parameter came from argv or from the result of an eval.

  • Shell Scripter (unregistered)

    "Shell scripts don’t care if these variables exist or not."

    They do! They just don't do so by default. The real WTF is that they didn't start their shell scripts with:

    #!...
    set -eu
    

    Or, when the shell supports "-o pipefail" (e.g. Bash):

    #!...
    set -eu -o pipefail
    

    Is this simple best practice really so unknown?

    "set -u" means that the shell stops when an undefined variable is used, rather than blindly continuing from that unexpected state.

    "set -e" means that the shell stops when a command fails, rather than blindly continuing from that unexpected state.

    "set -o pipefail" means the same for multiple commands in a pipe (because "set -e" only uses the exit code of the last command in a chain of pipes).

    There are other useful options like "-x" and "-v" that show commands before they are executed. That way you don't have to write myriad "echo do_stuff; do_stuff" pairs just to have proper context when a command fails.
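
    A sketch of what "-u" buys you against exactly this article's failure mode (the variable names are invented):

    #!/bin/bash
    set -eu -o pipefail

    BACKUP_ROOT="/backups/$USER"

    # With "set -u", the typo below aborts the script with
    # "BACKUP_ROT: unbound variable" instead of expanding to ""
    # and running "rm -rf /stale".
    rm -rf "$BACKUP_ROT/stale"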

  • Linux Administrator (unregistered)

    Wait, they want people to make their own backups using custom shell scripts?

    Why don't they provide proper snapshots? ZFS on Linux has been usable for years. You just create regular snapshots, and the user always has instant read-only access to previous versions of their whole tree, without wasting any disk space.

    Those snapshots are also easy and very efficient to back up to a different machine; certainly more efficient than hand-written backup scripts, even ones that use rsync.

    (As soon as Btrfs stops randomly crashing and burning your data, you may replace ZFS with Btrfs here.)
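
    For instance, a snapshot rotation with ZFS on Linux can be as small as this (the pool and host names are invented):

    # Instant, initially zero-size, read-only snapshot of the users' tree:
    zfs snapshot tank/home@2019-02-14

    # Users can pull back old versions themselves, no admin involved:
    ls /tank/home/.zfs/snapshot/2019-02-14/

    # Incrementally replicate yesterday-to-today to another machine:
    zfs send -i tank/home@2019-02-13 tank/home@2019-02-14 |
        ssh backuphost zfs receive backup/home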

  • Malgond (unregistered)

    I also learned to make backups the hard way. On a university system with thousands of student accounts, I decided to bring some order to the passwd file:

    sort [some options] </etc/passwd >/etc/passwd

    And what I got was an empty passwd file... O SHOOT! The last backup was from before the start of the semester; all the new students' entries were gone... THINK! FAST! I quickly put in an entry for root only, set the password, and rebooted into single-user mode:

    dd if=/dev/the_root_device of=/some/big/volume/dump.bin

    grep [an abominable regex] <dump.bin >passwd1

    vi passwd1   # manually remove the garbage

    sort -u <passwd1 >passwd

    reboot

    Uffffff.... Now go set up this damned backup!
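
    For anyone who hasn't met this trap: the shell truncates the output file before sort ever reads it. Two safe spellings of the same intent:

    # sort -o is specified to work even when output and input are the same file:
    sort -o /etc/passwd /etc/passwd

    # ...or write to a scratch file and rename it over the original:
    sort </etc/passwd >/etc/passwd.new && mv /etc/passwd.new /etc/passwd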

  • Rodo (unregistered)

    Oh, this reminds me of the old days.

    Once I created a script that looked like:

    cd /tmp
    rm -rf *

    Guess what happens if /tmp does not exist.
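
    The minimal fix: chain the two commands so the rm never runs if the cd fails.

    # "&&" stops things in the starting directory if /tmp is missing;
    # the two-line version happily runs rm -rf * wherever it happens to be.
    cd /tmp && rm -rf ./*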

  • OlegYch (unregistered)

    This pretty much sums up everything that is wrong with our industry: absence of safety in our tools, no safeguards, inadequate training.
