• (nodebb)

    I don't get it, what is it about their file storage which made backups difficult? Shouldn't it be the easiest type of structure to back up - it's just files, after all? Usually it's trickier to back up databases and stuff.

  • (nodebb)

    Technically it was no longer their data anyway. As soon as you upload anything, you give Alphabet an unrestricted permanent license to use the uploaded content in whatever manner they please. Note that, technically, they could also restrict your usage of your own data, etc.

    Was quite funny when I worked at a government facility and an army of lawyers one day showed up at my shared workspace there, demanding that I stop what I was doing right now, because the dude who usually sat there had by mistake been uploading private pictures with his office account via VPN from home office. Even though I was the wrong guy, I got a full legal coaching on why it is so bad in so many ways to even get close to anything from Alphabet. Was quite fun, and obviously I only got the privilege of thousands of dollars of legal consultation because they needed an excuse to book their appearance at what was basically an empty chair :-).

  • (nodebb) in reply to Mr. TA

    Alphabet's data storage is designed so that you can put stuff in easily and Alphabet can get stuff out easily. Everything else is an afterthought by design. It took my buddy two months to download his pictures from the g-drive because the API constantly crashes and throws random errors, just to prevent you from migrating "your" data.

  • (nodebb) in reply to MaxiTB

    What is Alphabet? I didn't see any mention of it in the article or linked news.

  • (nodebb) in reply to Mr. TA

    The article talks about "G-drive", which various commentators have taken as being short for "Google Drive" (they might even be right), operated by one of Alphabet's subsidiaries.

  • (nodebb) in reply to Steve_The_Cynic

    But the sources state clearly that it's their own system - g-drive stands for "government drive" - and Google, for all its faults, wouldn't have embarrassed itself with such a failure.

  • H-Drive (unregistered)

    Those of a certain age will likely remember their H:\ drive. Usually known for storing their Home files.

    I believe other articles have mentioned that the G was for Government. I expect it's a G:\ drive, like H:\ or C:\.

    That, along with "South Korea made the choice to do it themselves", makes me think it's weird to jump to the assumption that it was outsourced.

    But that doesn't explain why the structure couldn't be backed up.

  • Álvaro González (github)

    I have to say that 3.5" floppy disks were pretty robust back in the day. I even used mine to accidentally short-circuit a desk lamp when I was at university. The only thing I ever lost was a file with Pet Shop Boys lyrics I had downloaded from the internet. The main drawback was how painfully slow it was to copy many small files; you were better off creating an ARJ archive and packing them in there.

  • (nodebb) in reply to Mr. TA

    I don't get it, what is it about their file storage which made backups difficult?

    According to the story linked from TFA it was because it was a "large capacity low performance structure". I read that to mean that they didn't have the bandwidth for people to use it and have it backed up at the same time. They skimped on the cost.

  • Scragar (unregistered) in reply to jeremypnet

    For its entire 8-year lifespan they had planned a secondary data centre, both as a backup and to improve throughput (their original system only allowed 100 MB/s write speed across all users, which for obvious reasons meant that during busy times people had to wait for uploads to complete).

    The problem was it never got financed; every year the budget item would be skipped, because what they had now was working, so they didn't see the value in building a second instance.
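
    To put that 100 MB/s figure in perspective, here's a rough back-of-the-envelope sketch (the capacity number is a made-up illustration, not a figure from the article):

    ```python
    # Rough sketch: how long a single full copy takes at the quoted
    # 100 MB/s aggregate write speed. capacity_tb is a hypothetical
    # illustration value, not a figure from any source.
    capacity_tb = 500        # assumed total data volume, for illustration only
    throughput_mb_s = 100    # the quoted aggregate write speed

    seconds = capacity_tb * 1_000_000 / throughput_mb_s
    print(f"{capacity_tb} TB at {throughput_mb_s} MB/s: ~{seconds / 86_400:.0f} days")
    # -> roughly 58 days, and that's with no user traffic competing for the link
    ```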

  • Scragar (unregistered) in reply to H-Drive

    The structure contained info the government didn't trust anyone else with, and when the design was originally proposed it was going to be a stopgap that could be scaled horizontally, with plans for another to be built right after the first passed testing.

    Then a bunch of decision makers with no common sense realised that it worked well enough, so they saw no value in building a second instance or in financing the long-term ten-year plan to build an actually robust, decentralised infrastructure with automatic failover. And hey, they won elections on cutting wasteful spending that clearly wasn't needed for at least 8 years, so that's a win for them. And when it all burned down, it wasn't because the data centre was never meant to be long-term, or because the UPSs were packed in tighter than they should have been (the data centre was bigger than planned due to the lack of other servers), or because their fire suppression system hadn't been maintained since it was installed, or anything else like that showing a clear budget issue - no, it was clearly the result of someone else screwing up.

  • (nodebb)

    Reminds me of the old saying "Never underestimate the bandwidth of a truck full of magnetic tapes traveling 55mph." Well, it was applicable back in the dark ages at least.
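
    For fun, the arithmetic behind the saying, with all the numbers picked purely for illustration:

    ```python
    # All figures are hypothetical, chosen only to illustrate the saying.
    tapes_in_truck = 10_000     # assume a truck holds ~10,000 cartridges
    tape_capacity_tb = 10       # assume ~10 TB per cartridge
    distance_miles = 550
    speed_mph = 55

    trip_seconds = distance_miles / speed_mph * 3600             # a 10-hour drive
    payload_gigabits = tapes_in_truck * tape_capacity_tb * 8_000
    print(f"~{payload_gigabits / trip_seconds:,.0f} Gb/s effective bandwidth")
    # -> on the order of 22,000 Gb/s; the latency, of course, is ten hours
    ```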

  • Richard Brantley (unregistered)

    So let me get this straight - the whole point of the project was that end users couldn't be trusted to keep redundant copies of their data, so they created a central repository...with no redundant copies? Have I got that right?

  • Argle (unregistered)

    Circa 1979, I was a lab assistant at a community college. One of my duties was backing up our PDP-11, which had a 100M internal drive, using 50M removable platter drives. It was all pretty normal stuff. All this was hidden from users, of course, who used the system via Hazeltine dumb terminals. In the manner of someone who has learned just enough to be dangerous, we got suggestions from users that "maybe you should add some floppy drives rather than keep it all in memory."

  • (nodebb)

    I'm sure you can guess how many times I've told my family members that "any file you don't have three backups of, one outside your house, is a file you don't care about". I'm also sure you can guess how many times I've had to use SpinRite.

  • Wayne (unregistered)

    A friend of mine is a system manager for a certain insurance company in Omaha. They have two off-site replicated data centers, one literally a few thousand miles away, and do failover recovery tests twice a year.

    It's just pathetic that something like this could happen. But that's government for ya.

  • (author) in reply to Wayne

    I've worked in a number of private sector companies that have had similarly idiotic file storage solutions. So I don't think it's just government; I think it's just the fact that getting anyone to invest in proper disaster recovery is hard. I mean, even Amazon has a number of services which only operate inside US-EAST-1, as everyone found out last week.

  • Isomeme (unregistered)

    Providing graceful degradation is indeed hard. Providing guaranteed off-site cold storage backup for all data with a ~1 hour coverage SLO is trivially easy by comparison. Yes, recovery is slow from cold storage. But at least recovery happens.
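
    Not claiming this is anyone's actual setup, but a minimal sketch of what an hourly off-site cold copy can look like (the paths, the rclone remote name, and the schedule are all placeholders):

    ```python
    # Minimal sketch of an hourly "ship everything changed off-site" job.
    # Paths and the "offsite:" rclone remote are placeholders, not a
    # description of any real system.
    import pathlib
    import subprocess
    import time

    SOURCE = pathlib.Path("/srv/gdrive")        # hypothetical data root
    STAGING = pathlib.Path("/var/backups")      # local staging area

    def hourly_archive() -> pathlib.Path:
        """Pack files modified in the last hour into a timestamped archive."""
        archive = STAGING / f"incr-{time.strftime('%Y%m%d-%H%M%S')}.tar.gz"
        # GNU tar: --newer-mtime limits the archive to recently changed files.
        subprocess.run(
            ["tar", "--newer-mtime", "1 hour ago", "-czf", str(archive), str(SOURCE)],
            check=True,
        )
        return archive

    def ship_offsite(archive: pathlib.Path) -> None:
        """Copy the archive to a different site/region (rclone is one option)."""
        subprocess.run(["rclone", "copy", str(archive), "offsite:cold-backups/"], check=True)

    if __name__ == "__main__":
        ship_offsite(hourly_archive())   # run from cron every hour
    ```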

  • (nodebb)

    Those of a certain age will likely remember their H:\ drive. Usually known for storing their Home files.

    My employer still maps our personal storage to the H:\ drive at login, same as they did 25 years ago.

  • Stuart (unregistered)

    I used to work, full time, as a backup/recovery specialist. I contracted to a large company you've probably never heard of - they called themselves IBM - and was assigned, full time, to a particular client.

    I was asked, shortly after I started, if they would be able to recover from their backups. My response: "I don't know. I hope so, but I'm not confident." The backup system was not healthy, in a whole number of ways. They weren't happy with that response; but I couldn't in all honesty give any other.

    It took me something like six months of fairly solid effort to get ahead of the curve, to the point where I was able to say, "Probably, I hope." (They didn't do recovery tests, which is what would have been required for anything more solid than that.) Not much later, a major SAN upgrade went pear-shaped and they lost their central storage unit.

    They were EXTREMELY lucky that I'd put in that effort; if I hadn't, they would have been utterly screwed. It took them a long time to recover all their data, but they got there in the end and they're still in business.

    TEST YOUR BACKUPS, people. Do full blown disaster recovery tests as well. Anything less is playing with the potential loss of everything.
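
    Even the cheapest possible restore spot-check is better than nothing - pull a random sample of files back out of backup and compare checksums against the live copies. A sketch of the idea (the "my-backup-tool" command and paths are placeholders for whatever your backup system actually provides):

    ```python
    # Restore spot-check sketch. "my-backup-tool" is a placeholder for your
    # real backup software's single-file restore command. A spot check like
    # this is no substitute for a full disaster recovery test.
    import hashlib
    import pathlib
    import random
    import subprocess
    import tempfile

    LIVE_ROOT = pathlib.Path("/srv/data")    # hypothetical live file tree

    def sha256(path: pathlib.Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def restore_file(relative: pathlib.Path, dest: pathlib.Path) -> None:
        # Placeholder: invoke your real backup tool's restore here.
        subprocess.run(["my-backup-tool", "restore", str(relative), str(dest)], check=True)

    def spot_check(sample_size: int = 20) -> None:
        files = [p for p in LIVE_ROOT.rglob("*") if p.is_file()]
        for live in random.sample(files, min(sample_size, len(files))):
            with tempfile.TemporaryDirectory() as tmp:
                restored = pathlib.Path(tmp) / live.name
                restore_file(live.relative_to(LIVE_ROOT), restored)
                if sha256(restored) != sha256(live):
                    raise SystemExit(f"MISMATCH: {live} - do not trust this backup")
        print("spot check passed (necessary, not sufficient)")

    if __name__ == "__main__":
        spot_check()
    ```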

  • Duke of New York (unregistered) in reply to Scragar

    It's looking pretty horizontally scaled now, by which I mean flattened.

