• (cs) in reply to csrster
    csrster:
    I work for a national internet archiving consortium. We are legally obliged to ignore robots.txt.
    Well, nobody else is going to ask this question at this point, because here on TDWTF we value immediacy and sad self-gratification above knowledge.

    But why? And by what law?

    I mean, I understand ignoring robots and crawlers and stuff; that's just common sense.

    But exactly what law mandates that you invade personal privacy when there's a Big Ole Warning Sign outside saying "Chien Méchant! Chatte Lunatique!" ("Vicious Dog! Lunatic Cat!")?

    May your idiot bosses be infested with bot flies. (Yes, I'm aware that the suggestion is slightly Alanis Morissette^W^W ironic.)

  • Willllllllllllllllll (unregistered)

    DON'T use headers to secure your web application.

    Just don't.
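
    The reason, roughly: header() only queues an HTTP response header; it does not stop the script. A minimal sketch of the failure mode in PHP (the file name and the delete_page() helper are invented for illustration):

        <?php
        // admin.php - BROKEN: treats a redirect header as the access check
        session_start();
        if (empty($_SESSION['user'])) {
            header('Location: login.php');   // politely asks the client to leave...
        }
        // ...but execution continues, so the "protected" action below still runs
        // for any client that ignores the redirect: a crawler, curl, and so on.
        delete_page($_GET['id']);            // delete_page() is a made-up placeholder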

  • David Guaraglia (unregistered)

    Well, actually the correct answer is "never delete stuff from your server on a GET request, only on a POST request, and even then only after checking that the user has permission to delete stuff". Everything else is just fixing potholes.
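
    A sketch of what that looks like in PHP (current_user_can() and delete_page() are stand-ins for whatever your CMS actually provides):

        <?php
        // delete.php - mutate only on POST, and only for users allowed to delete
        session_start();

        if ($_SERVER['REQUEST_METHOD'] !== 'POST') {
            http_response_code(405);   // GETs, crawlers included, never delete anything
            exit('Method Not Allowed');
        }

        if (empty($_SESSION['user']) || !current_user_can($_SESSION['user'], 'delete')) {
            http_response_code(403);
            exit('Forbidden');
        }

        delete_page((int) $_POST['id']);   // placeholder for the real delete logic
        header('Location: pages.php');
        exit;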

  • twobee (unregistered)

    Usually you also put a die() below your header() call, to be sure the script really stops.
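
    For example (login.php is just an illustrative target; any header() that is meant to end the request needs the same treatment):

        <?php
        session_start();
        if (empty($_SESSION['user'])) {
            header('Location: login.php');
            die();   // or exit; without this line, everything below still executes
        }
        // only authenticated users ever get past this point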

  • Anon (unregistered)

    Yeah, server-stored session variables that get deleted after authentication or action authorization are just too hard as well.
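
    For anyone who wants the shape of it, a hand-rolled sketch in plain PHP 7+ (no particular framework; the field and session key names are made up):

        <?php
        session_start();

        if ($_SERVER['REQUEST_METHOD'] !== 'POST') {
            // Render phase: issue a one-time token, remember it server-side,
            // and embed it as a hidden field in the delete form.
            $_SESSION['delete_token'] = bin2hex(random_bytes(16));
            echo '<form method="post">'
               . '<input type="hidden" name="token" value="' . $_SESSION['delete_token'] . '">'
               . '<button>Delete</button></form>';
            exit;
        }

        // Handle phase: the token must match and is consumed immediately, so a
        // replayed request (or a crawler following a GET link) gets nowhere.
        if (empty($_POST['token']) || empty($_SESSION['delete_token'])
                || !hash_equals($_SESSION['delete_token'], $_POST['token'])) {
            http_response_code(403);
            exit('Invalid or reused token');
        }
        unset($_SESSION['delete_token']);   // one use only
        // ...perform the destructive action here...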

  • trifero (unregistered)

    Love it.

    Thanks!

  • DamnSon (unregistered)

    Well written, dear sir.

  • passing_coder (unregistered)

    Wouldn't you usually tell robots/crawlers NOT to index a CMS page anyway? AND throw them into a honeypot if they try? On top of that, wouldn't you design a CMS to be accessible only from IPs previously stored in a database anyhow?
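
    Something along those lines, crudely sketched in PHP (hardcoded documentation-range addresses stand in for the DB-backed allowlist mentioned above; the robots.txt side is just a Disallow: /cms/ line for the well-behaved crawlers):

        <?php
        // Front-door check for anything under the CMS: only allowlisted IPs get in.
        $allowed = ['203.0.113.7', '198.51.100.22'];   // would be loaded from the DB in practice

        if (!in_array($_SERVER['REMOTE_ADDR'], $allowed, true)) {
            http_response_code(403);   // or route the request into the honeypot instead
            exit('Forbidden');
        }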
