• nico (unregistered) in reply to Franz Kafka
    Franz Kafka:
    Not all spiders declare themselves, and anyway, how would you tell them apart? Best thing to do is actually protect delete links.

    Well... that's what I was saying. The spider will not see the link at all. I guess you need to be a registered user, probably with high privileges, in order to delete stuff. Definitely a spider does not fall in this category, whether it declares itself or not... so just don't show it the link (as you don't show it to guest users or to normal users that can't delete stuff)

  • My Name * (unregistered) in reply to GrandmasterB
    GrandmasterB:
    Jim:
    Screw whether it's a Get or Post request.

    Just make sure any page that is normally accessed by authorization actually checks that the user is logged in and authorized to perform that function.

    You just beat me to this. If the fact that it's a GET vs a POST matters to your app, you're doing something very wrong. Properly check the user credentials before doing the deletion, and you will not have this problem, period.

    It's still a good idea to use POST requests for this. I used links to delete content once and got promptly bitten by an authorized user who ran a web mirror tool on that site. Done right, the data will at least survive well-behaved tools in the hands of your average user.
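    Done right, it could look something like this (a minimal sketch; delete.php, user_can_delete(), delete_item() and the session keys are assumptions for illustration):

    // delete.php: require POST *and* an authorized user before touching anything
    session_start();

    if ($_SERVER['REQUEST_METHOD'] !== 'POST') {
        header('Allow: POST', true, 405); // reject plain GETs, crawlers included
        exit;
    }

    if (!isset($_SESSION['usr_id']) || !user_can_delete($_SESSION['usr_id'])) {
        header('Location: index.php');
        exit; // the header alone does not halt the script
    }

    delete_item((int)$_POST['id']); // app-specific helper (hypothetical)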

  • (cs)

    FWIW, I always use Location, followed by exit; and/or die();
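    That is:

    header('Location: index.php');
    exit; // header() only queues the redirect; exit actually stops the script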

  • 50% Opacity (unregistered)

    I think somebody put something in my morning coffee, I'm starting to see things twice, thrice, nay... I could swear I saw the die()/POST/header()/if($auth) comments at least a hundred million times in the last five minutes.

    I'm going back to bed...

  • 50% Opacity (unregistered) in reply to 50% Opacity
    50% Opacity:
    I think somebody put something in my morning coffee, I'm starting to see things twice, thrice, nay... I could swear I saw the die()/POST/header()/if($auth) comments at least a hundred million times in the last five minutes.

    I'm going back to bed...

    Yeah, something's definitely wrong with this coffee...

  • Fnord Prefect (unregistered) in reply to RHuckster
    RHuckster:
    Look on the bright side! You can at least recover your data through archive.org...
    I've had to do that a couple of times before - mostly due to users deleting things and not having any sort of backups.
  • Gretta (unregistered)

    <input type="button" />

    No, no, no, no, no!!! It is:

    <input type="submit" />

    Whoever came up with the "button" that doesn't do anything unless I give the entire internet permission to totally fsck my browser (also called enabling scripts) should be shot. In the leg, so as to live long enough for the torture.

    The form submit capability is there for a reason. Use it!

    And oh yeah, GETs should be idempotent. If you didn't know that already, kindly remove your stinking excrement from my web.
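    A plain form needs no scripting at all (a sketch; delete.php and the field names are made up):

    <form method="post" action="delete.php">
      <input type="hidden" name="id" value="94" />
      <input type="submit" value="Delete" /> <!-- works fine with scripts disabled -->
    </form>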

  • Danny Fullerton (unregistered) in reply to Captain Oblivious

    Using a POST (input button) instead of a GET (link) is not a proper solution for this threat. It would have fixed the case for the crawler, but a hacker could have done the same using the POST method.

    Use GET or POST whenever you want to. Just remember the difference and the guidance from the W3C: GET is for navigation, POST is for data.
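    If a forged POST is the worry, the usual extra step is a per-session token (a rough sketch; the field and session names are invented):

    // when rendering the delete form
    $_SESSION['token'] = md5(uniqid(rand(), true));
    echo '<input type="hidden" name="token" value="' . $_SESSION['token'] . '" />';

    // in the handler, before deleting anything
    if (!isset($_POST['token']) || $_POST['token'] !== $_SESSION['token']) {
        exit; // the request did not come from our own form
    }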

    my 2 cents

  • Laguana (unregistered) in reply to Gretta
    Gretta:
    And oh yeah, GETs should be idempotent. If you didn't know that already, kindly remove your stinking excrement from my web.

    But deleting the whole website is idempotent, unless it fails when there is nothing left to delete...

  • (cs) in reply to rfsmit
    rfsmit:
    brazzy:
    I don't believe you can call yourself a web developer until you've built an app that uses hyperlinks for deletion and have all your data deleted by a search bot.
    That explains a lot. Just don't tell the Web developers who know how to capitalize properly.
    Oh please, for the love of god... That is so extremely mind-numbingly irrelevant...

    Why oh why can't grammar people just stay at their own linguistics forums and keep away from any technical debates on the web? The signal-to-noise ratio isn't exactly increased by their continuous super-relevant insights.

    [sorry, had to blow off some steam - am happy again now]

  • (cs) in reply to nico
    nico:
    Franz Kafka:
    Not all spiders declare themselves, and anyway, how would you tell them apart? Best thing to do is actually protect delete links.

    Well... that's what I was saying. The spider will not see the link at all. I guess you need to be a registered user, probably with high privileges, in order to delete stuff. Definitely a spider does not fall in this category, whether it declares itself or not... so just don't show it the link (as you don't show it to guest users or to normal users that can't delete stuff)

    And what happens if an authorized user copies the delete link to his personal webpage? You know, for quick access and stuff.

  • (cs) in reply to GrandmasterB
    GrandmasterB:
    You just beat me to this. If the fact that it's a GET vs a POST matters to your app, you're doing something very wrong. Properly check the user credentials before doing the deletion, and you will not have this problem, period.

    Until, that is, a perfectly legitimate and fully authorized user installs a browser plugin to preview links or speed up browsing by prefetching them.

  • (cs) in reply to diaphanein
    diaphanein:
    brazzy:
    I don't believe you can call yourself a web developer when you've built an app that uses hyperlinks for deletion and have all your data deleted by a search bot.
    FTFY
    I don't know about you, but I consider experience gained through personal mistakes immeasurably more valuable than a conviction that you're too good to make mistakes.
  • plaga (unregistered) in reply to Satanicpuppy
    Satanicpuppy:
    Mind you, I hate crap like the example. If you do if($auth){...} you're going to get screwed by someone who understands cookies. You need to run an actual sanity check on the cookie data.
    Which is what you should do for all data anyway!
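    For instance, re-derive the privilege server-side instead of trusting anything the client sent (a sketch; assumes a PDO handle in $db, and the table and column names are invented):

    $stmt = $db->prepare('SELECT is_admin FROM users WHERE id = ?');
    $stmt->execute(array($_SESSION['usr_id']));
    if (!$stmt->fetchColumn()) {
        header('Location: index.php');
        exit;
    }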
  • (cs)

    I got hacked by Google once. It turned out to be code roughly equivalent to this:

    if (!isset($user) || (array_key_exists($user, $passwords) && $passwords[$user] == $password)) {
      // if $user is not set at all (e.g. a crawler that never logged in), the
      // first clause is true and the whole test passes: anonymous becomes admin
      echo "congratz you're admin";
    }

    Please do not ask why I thought this would be a good idea; I did not know either when I finally figured it out.

    Needless to say, this page doesn't have confirmation forms for deletion, because I didn't want to go through that trouble.
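    The obvious fix, for what it's worth, is simply to make an unset $user fail instead of pass (sketch):

    if (isset($user) && array_key_exists($user, $passwords)
            && $passwords[$user] === $password) {
      echo "congratz you're admin";
    }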

  • Chip (unregistered) in reply to Madball
    Madball:
    This brings back memories. Had almost the same exact thing happen several years ago. We had a backdoor admin page for a supplier "portal" for creating announcements. It had convenient little edit/delete links next to each pre-existing posting. We even had some nice internal documentation which spelled out where and how to create announcements. All was well with the world for a couple years (security was not really a concern).

    Until the infrastructure guys bought a Google search appliance. It dutifully found the documentation -> backdoor -> followed each delete link. It took some effort to track it down. After some chuckling, we started blaming Google for all future mysterious occurrences.

    In my much younger (and much more naive) days, I was involved in developing a custom CMS for a small online shop. One day we ran Xenu link checker on it, which obligingly followed all the unguarded "delete" links and wiped the whole stock database.

    Doh!

  • gilhad (unregistered) in reply to Ryan
    Ryan:
    Or.. stop using if (!auth) and instead put if(auth) at each page.

    A crawler won't auth, so they get nothing.

    That is a bad approach - the script should exit as soon as possible. An if (!auth) { ... exit; } at the beginning of the page is clearly visible, whereas
    if(auth){
    some long stuff with many other if(something){
    }} //oops, mistype, I want only one level down
    delete_all()
    } // why do I get a syntax error here, after deleting everything?
    
    is much harder to maintain
  • anon (unregistered) in reply to jonnyq
    jonnyq:
    Just make sure you die() before doing anything else.

    This is what I will be telling everyone today...

  • puppa! (unregistered)

    Does everybody here know what a header is, and what the client can do with it if its true intent is an attack?

  • (cs) in reply to nico
    nico:
    Franz Kafka:
    Not all spiders declare themselves, and anyway, how would you tell them apart? Best thing to do is actually protect delete links.

    Well... that's what I was saying. The spider will not see the link at all. I guess you need to be a registered user, probably with high privileges, in order to delete stuff. Definitely a spider does not fall in this category, whether it declares itself or not... so just don't show it the link (as you don't show it to guest users or to normal users that can't delete stuff)

    Well, that's what the validation code was for, wasn't it? If you're not authorised to view the page you get redirected back to the index....

    ...Unless you're one of those clients that ignore Location: headers.

      if(!isset($_SESSION['usr_id']) || !isset($_SESSION['usr_name']))
      {
          header('Location: index.php');
      }
      // display big long list of delete links, e.g.,
      echo '<a href="delete.php?page=94">Kill Page 94</a>'; // (link target reconstructed for illustration)
    

    Spider hits this code, it's not authorised, so it gets a Location header (which it ignores) followed by a big long list of delete links, which it blindly starts following. It's a good bet that the validation mechanism on subsequent pages is the same.

    Even if the Location: header is respected by the client, there is nothing in the call to header() that terminates the script's processing. The server still goes to the effort of generating that big long list of delete links: I guess it's only blind luck (to put it charitably) that prevented anyone from seeing what would happen if an unauthorised user tried to access the page that actually performs the deletion. Prior, that is, to it happening in the wild. (In other words: no, it's not "all well and good if the client respects the Location header").

    The only difference caused by the client ignoring "Location:" is that it didn't ignore the content that followed (which according to spec is supposed to offer alternative URIs for the requested resource anyway).
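    All the original snippet needed was a hard stop after the header (sketch):

      if(!isset($_SESSION['usr_id']) || !isset($_SESSION['usr_name']))
      {
          header('Location: index.php');
          exit; // without this, the list of delete links goes out to everyone, redirect or not
      }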

  • biff (unregistered)

    "..differentiate the difference..."????

    wtf?

  • Arie Kanarie (unregistered) in reply to Harold
    Harold:
    if(!isset($_SESSION['usr_id']) || !isset($_SESSION['usr_name'])) { header('Location: index.php'); exit() }

    would have been the quick fix.

    If you consider producing a parse error to be a quick fix....

    DoH!

  • (cs)

    This is not just a PHP WTF - it's an HTTP WTF.

    There are too many web developers who don't understand the basic tenets of HTTP. Including simple things such as the fact that a Location header doesn't force the client to do anything.

    I've tried to explain this before and been met with blank stares (or at least I imagine the people on the other end of the interwebs are staring blankly) when pointing out that following headers is technically entirely optional for browsers.

    Of course, the PHP should have included an exit() or die() after the header() line - but you don't understand this until you understand HTTP.

  • Anonym (unregistered)

    A similar thing happened to a friend of mine. He had to (for some reason) remove the security on his phpMyAdmin for a few minutes. Of course, just then, Google wanted to index his page. Poof, no data.

    Extreme timing, I must say.

  • dman (unregistered) in reply to rfsmit

    Is this a WTF-classic, or just a re-run of "Google ate my website" http://thedailywtf.com/Articles/The_Spider_of_Doom.aspx ?

    ... and to rfsmit : Folk in this decade who feel the need to capitalize "The Web" or "The Internet" are the same folk who still use phrases like "The Cancer", "The Aids" or "The Iraq". Once someone understands how it works, they stop calling devices the "Tele-Phone" or "Auto-Mobile". You'll get there eventually.

  • Old Git (unregistered) in reply to rohypnol
    rohypnol:
    ckoppelman:
    or just add rel="nofollow"
    Yeah, that'll teach those perky script kiddies to stay away from your website!
    rel="nofollow getoffmylawn"
  • Homer Jay (unregistered) in reply to dman
    dman:
    Once someone understands how it works, they stop calling devices the "Tele-Phone" or "Auto-Mobile". You'll get there eventually.

    "Tele-ma-phone". "Auto-ma-mobile". Aww, that was easy!

  • Dr. Bunsen Honeydew (unregistered) in reply to biff
    biff:
    "..differentiate the difference..."????

    wtf?

    It means to measure the rate of change of the difference between "edit" and "delete" with respect to time, or some other variable.
  • nico (unregistered)
    And what happens if an authorized user copies the delete link to his personal webpage? You know, for quick access and stuff.

    Nothing, because you also have the check in the page that actually issues the DELETE.

    I was just saying that if you start by putting the check in the frontend page, where you have the link to the DELETE page, you're even less prone to this kind of disaster, and you don't expose the page address to everyone.

  • (cs) in reply to Captain Oblivious
    Captain Oblivious:
    If you rely on a form button to delete content, also remember to use the POST method, not GET!
    What he said!

    Additionally, there is also robots.txt, which one can use...
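    For example (a sketch; the path is hypothetical, and it only deters well-behaved crawlers):

    User-agent: *
    Disallow: /admin/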

  • PHPer (unregistered)

    I never use the Location header, for just this reason.

    Captcha explains it all: validus

  • cracky (unregistered)

    Wow - 1998 called. It wants its WTF back.

  • Paul (unregistered)

    Reminds me of the webmail application I worked on in the late 90s.

    I discovered if I "cleverly" crafted an HTML email with a couple of img tags in it, like: [image] [image]

    I could delete most of another user's Inbox (not necessarily all, since the emptyTrash might have kicked off before the delete was finished).

    After a bit more investigation I found I could do pretty much everything via img tags. For instance, something like this: [image]

    would add a forwarding address to the account. Since desc and name were empty, it wouldn't show up in the forwarding address section of the user's profile.

    Or I could fill an email with a couple thousand: [image] [image]

  • Coward (unregistered) in reply to jonnyq

    QFT

    jonnyq:
    The article is a bit misleading about headers and PHP, but not off by much.
    $authorized = authorize(); // let's assume this is false
    if(!$authorized) {
      // this, by default, sets a 302 status header
      header("Location: index.php");
      // because I'm stupid, this was omitted
      // die(); 
    }
    
    // this works. PHP happily continues after the header() call
    deleteSomething(); 
    
    // this fails.  Since the 302 status header is set, PHP is going to die() before sending any actual output.
    echo "Hello World!";
    
    // this fails, too.  PHP done died.
    deleteSomethingElse(); 
    

    So, setting a Location header on a failed auth is OK. Just make sure you die() before doing anything else.

    Most robots/spiders handle 30x redirects just fine. It's just that PHP doesn't die() until you try to output something after that header was sent.

    Yes, using a <form> would keep the spider away, but that won't keep wrongdoers at bay. Yes, deletes should be done with a POST and not a GET, but that's less important than just making sure the damned script dies when auth fails.

  • Pedal to the Metal (unregistered) in reply to wee

    This is exactly why 'web accelerators' never took off (for example, the one from [a large search company with a funny name]): some online 'forums/webhosting/etc.' had/have "Delete Account" as a hyperlink (goodbye, account!)... also could be one of the lesser reasons for the presence of Captchas on web pages today...

  • Simon (unregistered)

    "Location" is only meaningful if the response code is 3XX or 201.

  • Simon (unregistered) in reply to Simon

    never mind... IA is probably doing the right thing; the Location isn't being ignored; it's just too late.

  • Wyrd (unregistered) in reply to jonnyq
    jonnyq:
    The article is a bit misleading about headers and PHP, but not off by much. /* code snipped out */

    So, setting a Location header on a failed auth is OK. Just make sure you die() before doing anything else.

    Most robots/spiders handle 30x redirects just fine. It's just that PHP doesn't die() until you try to output something after that header was sent.

    Yes, using a <form> would keep the spider away, but that won't keep wrongdoers at bay. Yes, deletes should be done with a POST and not a GET, but that's less important than just making sure the damned script dies when auth fails.

    Thanks for explaining this clearly. I haven't done any real web development and I was having a hard time understanding why the article was arguing for POST instead of GET, but most of the commenters were arguing for a die() to be inserted. Now it makes sense. :-)

    Also, thanks for showing me the BBCode "code" tag. (Yeah, all I had to do was click BBCode Okay, but I didn't.)

    -- Furry cows moo and decompress.

  • (cs) in reply to rfsmit
    rfsmit:
    brazzy:
    I don't believe you can call yourself a web developer until you've built an app that uses hyperlinks for deletion and have all your data deleted by a search bot.
    That explains a lot. Just don't tell the Web developers who know how to capitalize properly.

    So you are saying brazzy shouldn't tell himself?

  • hobart (unregistered) in reply to gilhad
    gilhad:
    Ryan:
    Or.. stop using if (!auth) and instead put if(auth) at each page.

    A crawler won't auth, so they get nothing.

    That is a bad approach - the script should exit as soon as possible. An if (!auth) { ... exit; } at the beginning of the page is clearly visible, whereas
    if(auth){
    some long stuff with many other if(something){
    }} //oops, mistype, I want only one level down
    delete_all()
    } // why do I get a syntax error here, after deleting everything?
    
    is much harder to maintain

    Maybe use better-laid-out code?

    if(auth)
    {
        some long stuff with many other if(something)
        {
        }
        delete_all();
    } // Close brackets lined up with open brackets - should not be difficult to spot mismatch
    

    Or, even better, try using functions

    if(auth)
    {
        do_deletion();
    } // Close brackets lined up with open brackets - should not be difficult to spot mismatch
    
    
    do_deletion()
    {
        some long stuff with many other if(something)
        {
        }
        delete_all();
    }
    

    As far as I can tell things like

    exit()
    are nothing more than rebranded gotos, with all of the problems that they bring. For a simple one-line test at the top of the page, it may not be too much of an issue but if you start scattering them throughout your code as soon as you've reached a certain point, then trying to check whether a particular bit of code is going to be executed can be a nightmare.
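    If scattered exits are the concern, one option is to push the page logic into a function and use early returns instead (sketch):

    function handle_page()
    {
        if (!auth()) {
            header('Location: index.php');
            return; // an early return stays local to the function
        }
        do_deletion();
    }

    handle_page();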

  • David (unregistered) in reply to Jim

    Yup, that should have worked. I also sometimes use a JavaScript redirect (document.location), and for the noscript-ers among us I would've made a clickable hyperlink to index.php.

  • nf (unregistered)

    header("Location: someurl"); exit; // fixed.

  • Matt Bernier (unregistered) in reply to Jim

    Exactly! If you put exit; or exit(), you will always STOP execution of the script immediately. PHP will not continue to execute, and even robots will only see a blank, non-functional page.

  • Darkstar (unregistered)
    Keep that in mind when designing your systems; use <input type="button" value="Delete Something" />, not <a href="...">index me!</a>. And maybe don't use just the Location header to secure your site.
    Now there is yet another WTF. The server must always enforce the access control model. Changing the way the client talks to the server doesn't solve anything.
  • (cs) in reply to hobart
    hobart:
    As far as I can tell things like
    exit()
    are nothing more than rebranded gotos, with all of the problems that they bring. For a simple one-line test at the top of the page, it may not be too much of an issue but if you start scattering them throughout your code as soon as you've reached a certain point, then trying to check whether a particular bit of code is going to be executed can be a nightmare.

    The main problem with goto is that it isn't easy to tell exactly where it is going. Also, it's flexible enough that one can get truly horrendous flow control.

    for and while are rebranded goto. They limit the flexibility enough that one has a great deal of difficulty making the sort of gordian knot that was common back in the days of goto.

    exit is a rebranded longjmp. It has the saving grace that there's only one place that it will longjmp to.

    While having multiple return or exit points in your subroutines or web page may make things difficult, it's nowhere near what goto (and gosub) once did.

  • (cs) in reply to Jay
    Jay:
    This is like those stores that put up "No weapons allowed" signs with the big red slash through a picture of a handgun. Like, yeah, some maniac is going to load up with guns and bombs and head out to the mall with the intention of killing as many people as he can before the police finally arrive to stop him, he's prepared to sacrifice his own life for the pleasure or glory or whatever he gets out of killing innocent people ... and then he's going to see a "No weapons allowed" sign on the door and say, "Oh, darn, I guess I can't do that here."

    Actually, those signs are much more about allowing the store or mall security types to treat anyone with detectable weapons as a gun-toting maniac - thus giving them a chance to deal with said maniac before he starts hurting people.

    Now, thinking such a sign will keep out the maniacs, allowing one to not have in-store security is a serious WTF. However, to my knowledge, I've only encountered one store manager that delusional.

    Just because you don't see the store security types doesn't mean they aren't there. Some savvy stores have one unit on duty watching the store cameras, a uniformed unit or three on call, and a few plain-clothes guards roaming the store, looking like customers. Really savvy stores will also have a uniformed guard near each exit, in addition to the hidden security.

  • (cs) in reply to Eevee
    Eevee:
    I'm a little disturbed that several people now have condoned wrapping almost an entire file's worth of code in an if() block. Flow control is your friend.

    I only saw one before this post. Most of the prior posts indicated using die or exit in the beginning if block. That only equates to wrapping the entire file in an if block if you're one of those purists who feels that each section of code should only have one return or exit point.

    For what it's worth, while I do agree that one should limit the number of exit points for any subroutine, it usually makes code less legible to carry this to the extreme. I find it tends to work better to try to keep early returns at the very beginning. If one really needs to have a large block of code, a potential exit point, and then another large block of code, that probably should be refactored into two subroutines, which are invoked by a simple subroutine which calls the first, and then conditionally calls the second.
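    Roughly (a sketch with invented names):

    function page()
    {
        first_half();
        if (!should_continue()) { // the old mid-page exit becomes a plain condition
            return;
        }
        second_half();
    }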

  • (cs) in reply to Edgeman
    Edgeman:
    And that's why header redirects like this need a die(); afterward... even if you never expect the code to hit it, better to be safe than sorry.

    This is a fundamental misunderstanding of header redirects. As other posts have mentioned, specifying that there should be a redirect header does not stop processing. If there is a die immediately after a header redirect, the code will always hit it, and your error logs will fill up with useless "errors" which are, in fact, statements of the web page actually working as designed.

    Debugging your web pages will go much easier if you keep stuff like this out of your logs. That way, when you have problems, you can go to your logs, and actually find real problems.

    (I'm submitting this response, because nobody else has indicated why one should use exit instead of die - people have only stated that one or the other should be used. It's not just a meaningless preference thing. Using one is poor practice, using the other good practice. Either is far, far better than having completely broken code like, well, the "Well-Intentioned Destruction" above.)

  • csrster (unregistered) in reply to wee

    I work for a national internet archiving consortium. We are legally obliged to ignore robots.txt.
