• (cs)

    OMFG! See! Always use proper authentication! FIRST!

  • (cs)

    Do tell, which website is this? I'd like to give them a signature of security authenticity from myself in the form of a link to the goatse man.

  • (cs)

    Once again, ladies and gentlemen, I give you: Security through incompetence!

    Please, a round of applause for our wonderful content management system!!!!

     

    (golfclap)

  • toxik (unregistered) in reply to Whiskey Tango Foxtrot? Over.

    fist!

    Security through client-side reliance doesn't pay off. And they were just unfortunate, really, and, um, had a pretty bad design too ... nevertheless.

     

    Sincerely (one of) Gene Wirchenko('s fans)

  • (cs)

    Good thing they didn't have a 'delete company' link...

  • (cs)

    I'm surprised the workaround wasn't just to add robots.txt to the site.

    If you're going to have 2 WTFs with JavaScript and cookies, why not put a 3rd one in there and get a hat trick?
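
    For the record, a robots.txt that keeps well-behaved crawlers away from the admin pages is only a few lines. Rough sketch below; the /admin/ path is just a guess at where the delete links would live:

    # Applies to every well-behaved crawler, Googlebot included.
    User-agent: *
    # Hypothetical location of the content-management pages with the delete links.
    Disallow: /admin/

    Of course it only helps against crawlers that honor it, which is why it would have been WTF #3 rather than an actual fix.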

  • (cs)

    That is awesome. I don't know what else to say.

  • Rich (unregistered)

    Heh, we have a similar cookie issue, though it's just a (we feel minor) security violation; nothing is deleted. Our "security" involves JavaScript in some of our Apache/Linux/phpwiki pages that looks for a login cookie and, if it's not there, redirects to an NT IIS server for automatic NTLM, which then creates a URL back to the wiki for the page login. The PHP page checks whether you're in the group that should be getting the page. For browsers this works fine, since all our browsers use cookies.

    Our Google appliance, of course, doesn't, and indexes all our "secure" pages. It wouldn't be too hard to fix this in Apache, but nobody has complained so far, and we tell all our wiki owners about the security hole.

  • (cs)

    That's a superb story, and an excellent lesson which I wish management had learned...

  • Omry Yadan (unregistered)

    Well, at least he can now use Google's cache as a backup.

  • Keith Gaughan (unregistered) in reply to GoatCheez

    The true muppetry of this is that the idiots who implemented it used GET requests (and get is meant to be side-effect free and idempotent) to implement a destructive actions.

    Whatever stupidity might be involved in the authentication method, it's this and this alone that make it suck ass. The great pity is that there are so many sites out there that do just the same thing. :@

  • (cs)

    This one qualifies as a "WTF" from a different point of view. I'd be grumbling "wtf?!?!?" over and over if my production system suddenly disappeared like this. Kudos to Josh for knowing how/where to investigate the cause and restore the site.


  • Jake (unregistered)

    I seem to recall a similar issue with another Google product - it was one of those things that's supposed to speed up internet browsing by simulating the user clicking on everything.  If this is the case, even with a good authentication scheme, you're in danger of deleting everything, because this is a direct user simulation (allowing access to all user cookies and such).

    I could be wrong about this, but I thought I read something on Slashdot about it.

  • Keith Gaughan (unregistered) in reply to Keith Gaughan

    Argh! That should be 'destructive actions' rather than 'a destructive actions'.


  • (cs)
    Alex Papadimoulis:

    As it turns out, Google's spider doesn't use cookies, which means that it can easily bypass a check for the "isLoggedOn" cookie to be "false".



    So the logic is something like:

    if (getCookie("isLoggedOn") != "false")
        CongratulationsYouAreLoggedIn()

    Why would anyone do that??  (and yes I realize the point of this site is to make you wonder things like that).


    Someone needs to replace it with this clearly superior code:

    if (getCookie("isLoggedOn") != false && getCookie("isLoggedOn") != FILE_NOT_FOUND)
        CongratulationsYouAreLoggedIn()

    :)
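
    Joking aside, the reason the spider walks right through is that a missing cookie isn't the string "false" either. A minimal sketch of what the check effectively does (the cookie name and helper are stand-ins for whatever the CMS really used):

    // The spider sends no Cookie header, so the lookup finds nothing at all.
    var isLoggedOn = null;              // getCookie("isLoggedOn") for Googlebot

    if (isLoggedOn != "false") {        // null != "false" evaluates to true
        // "Congratulations, you are logged in" -- including the crawler,
        // which then happily follows every delete link on the page.
    }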
  • Boobies (unregistered) in reply to Rich
    Anonymous:

    Heh, we have a similar cookie issue, though it's just a (we feel minor) security violation; nothing is deleted. Our "security" involves JavaScript in some of our Apache/Linux/phpwiki pages that looks for a login cookie and, if it's not there, redirects to an NT IIS server for automatic NTLM, which then creates a URL back to the wiki for the page login. The PHP page checks whether you're in the group that should be getting the page. For browsers this works fine, since all our browsers use cookies.

    Our Google appliance, of course, doesn't, and indexes all our "secure" pages.

    What is the point of knowingly implementing "security" that is so easily (even accidentally) bypassed? It won't stop any malicious people from destroying content.


    It wouldn't be too hard to fix this in Apache, but nobody has complained so far, and we tell all our wiki owners about the security hole.



    This sounds like even more of a WTF than the subject of the thread.
  • Keith Gaughan (unregistered) in reply to Jake
    I seem to recall a similar issue with another Google product - it was one of those things that's supposed to speed up internet browsing by simulating the user clicking on everything.  If this is the case, even with a good authentication scheme, you're in danger of deleting everything, because this is a direct user simulation (allowing access to all user cookies and such).

    Google Web Accelerator. As I said, that's because people were using GET requests to implement destructive actions, which is a big no-no. All GWA did was expect that the sites you used conformed to the HTTP 1.0 and 1.1 specs. To be frank, the people who produce sites and apps that would let any spider, proxy, or prefetcher do that deserve a Darwin Award for stupidity.

    Or a Daily WTF.
  • (cs) in reply to Jake
    Anonymous:

    I seem to recall a similar issue with another Google product - it was one of those things that's supposed to speed up internet browsing by simulating the user clicking on everything.  If this is the case, even with a good authentication scheme, you're in danger of deleting everything, because this is a direct user simulation (allowing access to all user cookies and such).

    I could be wrong about this, but I thought I read something on Slashdot about it.



    http://webaccelerator.google.com
  • Billings (unregistered)

    This is actually a fairly common occurrence on misconfigured wikis.

  • (cs) in reply to Keith Gaughan
    Anonymous:
    The true muppetry of this is that the idiots who implemented it used GET requests (and get is meant to be side-effect free and idempotent) to implement a destructive actions.

    Whatever stupidity might be involved in the authentication method, it's this and this alone that make it suck ass. The great pity is that there are so many sites out there that do just the same thing. :@


    If this were Digg, or Slashdot, I'd totally mod that comment up.


  • (cs)

    Some years ago, a spider got caught in one of our websites, which had a sophisticated tree view CGI script that used long strings of 0 and 1 to indicate which branch of the tree is opened (displays its subbranches) and which is closed. To make it worse, this parameter was implemented as a virtual path, so e.g. http://www.somedomain.at/cgi-bin/treeview/000000000 showed the tree with all branches closed and http://www.somedomain.at/cgi-bin/treeview/111111111 showed the tree with all branches open. Well, it all amounted to the equivalent of a DoS attack. That's when we learned what robots.txt is all about.

  • (cs) in reply to Jake
    Anonymous:
    I seem to recall a similar issue with another Google product - it was one of those things that's supposed to speed up internet browsing by simulating the user clicking on everything.  If this is the case, even with a good authentication scheme, you're in danger of deleting everything, because this is a direct user simulation (allowing access to all user cookies and such).

    Yes, it was the Google Web Accelerator.

    It does not simulate the user clicking "everything."  Only on the links, but not the buttons that submit forms.  This should not be a problem on a sane web site because GET requests should not have any effect other than to return a page.  POST requests should be the only way to do things like delete files or purchase something on a credit card, and the web accelerator would never submit a POST request before the user clicked on a button.

    Unfortunately, not all web sites are sane, and those sites treated these GET requests from the Google web accelerator as if the user had actually clicked on something.  Google really didn't do anything wrong here, but they got blamed for the bad habits of dumb programmers anyway.
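
    For what it's worth, the sane version of a delete link is just a tiny form. The page and field names below are made up, but the shape is the standard one: GET only ever fetches pages, and the deletion happens on the POST.

    <!-- Following a link (GET) can never trigger this; only the button can. -->
    <form method="post" action="DeletePage.asp">
        <input type="hidden" name="id" value="123" />
        <input type="submit" value="Delete this page" />
    </form>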

  • The Internet (unregistered)

    OMG... Google ate my home page!

  • Otto (unregistered) in reply to Keith Gaughan

    Anonymous:
    Google Web Accelerator. As I said, that's because people were using GET requests to implement destructive actions, which is a big no-no.

    Total agreement. And it's not just "destructive" actions, but *any* action which causes a stateful change to the data is supposed to be performed by POSTs. That's sorta the whole point of having two separate types of requests in the first place.

    GWA caused problems, but like you said, any caching and/or "look ahead" type of proxy would have caused those same problems. The fault is with the bad web developers out there. If more people would, oh, read the friggin' relevant RFCs before they go and write their code, then they would avoid these sorts of issues.

  • Spider Squasher (unregistered) in reply to ammoQ
    ammoQ:
    Some years ago, a spider got caught in one of our websites, which had a sophisticated tree view CGI script that used long strings of 0 and 1 to indicate which branch of the tree is opened (displays its subbranches) and which is closed. To make it worse, this parameter was implemented as a virtual path, so e.g. http://www.somedomain.at/cgi-bin/treeview/000000000 showed the tree with all branches closed and http://www.somedomain.at/cgi-bin/treeview/111111111 showed the tree with all branches open. Well, it all amounted to the equivalent of a DoS attack. That's when we learned what robots.txt is all about.


    Meanwhile, there are also plenty of robots (used by spammers to look for email addresses, mostly) that don't follow robots.txt at all, so you'd still get the same problem.

    Many times they'll also send a fake User-Agent to look like a browser, so the only way you can identify them is by behavior (for example, list honeypot pages as disallowed in robots.txt and either don't link to them, or use "invisible" links.... if anyone hits those pages, block their IP).

  • hack-o-matic (unregistered)

    Wow, I just learned something. I realize that I am opening myself up to much criticism by posting this, but I myself have created a few sites which contain pages such as DeleteClient.asp?id=123, where 123 is the client to delete. I always check a session variable to make sure they are logged in before doing the action, but it sounds like that was a Very Bad Idea. The problem is: how do I trigger the action via hyperlink, when I can only pass values in the QueryString?

  • Keith Gaughan (unregistered) in reply to Otto
    Total agreement. And it's not just "destructive" actions, but *any* action which causes a stateful change to the data is supposed to be performed by POSTs. That's sorta the whole point of having two separate types of requests in the first place.

    Not quite anything. For instance, every time your webserver writes to its log file, you're causing a state change. However, such changes are ok because they don't interfere with GET's idempotency: you can do it time after time, and it's the same as doing it once: the extra data doesn't interfere with the running of the app.

    What these idiots were doing, now that is another matter entirely.

    Personally, I'm surprised nobody's brought up the "but form buttons are so ugly" chestnut. I've a way to smack that one down too.
  • Keith Gaughan (unregistered) in reply to hack-o-matic

    First, don't use query strings, use forms. That's what they're there for.

    And it's possible to style buttons so that they look decent or the same as a link. I've done it many times myself.
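
    Something along these lines, for instance; the "linkish" class name is arbitrary and the exact rules depend on the site's stylesheet, but it shows the idea:

    <style>
        /* Make a real submit button look like an ordinary text link. */
        button.linkish {
            background: none;
            border: none;
            padding: 0;
            color: #0000ee;
            text-decoration: underline;
            cursor: pointer;
            font: inherit;
        }
    </style>

    <form method="post" action="DeleteClient.asp">
        <input type="hidden" name="id" value="123" />
        <button type="submit" class="linkish">Delete client 123</button>
    </form>

    You keep the POST semantics and the visual design, and a spider gets nothing to follow.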

  • hack-o-matic (unregistered) in reply to Keith Gaughan

    Thanks Keith. You just made me less dangerous. -Bill

  • The Bears (unregistered) in reply to hack-o-matic

    hack-o-matic: Use a button instead. Not only will it do the right thing, it's a clue to the user that clicking it will perform some action rather than just fetching a page.
    
  • (cs) in reply to hack-o-matic

    <a href="#" onclick="myform.submit();return false;"> ...

  • rponton (unregistered) in reply to hack-o-matic
    Anonymous:
    Wow, I just learned something. I realize that I am opening myself up to much criticism by posting this, but I myself have created a few sites which contain pages such as DeleteClient.asp?id=123, where 123 is the client to delete. I always check a session variable to make sure they are logged in before doing the action, but it sounds like that was a Very Bad Idea. The problem is: how do I trigger the action via hyperlink, when I can only pass values in the QueryString?


    That's a fundamentally bad design. 

    The quick fix is to have DeleteClient.asp?id=123 be a confirmation box with a button that does an HTTP POST.
  • (cs) in reply to loneprogrammer
    loneprogrammer:


    It does not simulate the user clicking "everything."  Only on the links, but not the buttons that submit forms. 



    what about links such as this? <a href="javascriptmyform.Submit()">
  • (cs) in reply to your mom

    the forum software ate my colon, but you know what i mean.  wow.  that didn't come out right.  wow, that didn't either.  i'm gonna stop now.

  • alt (unregistered)

    Sweet! Can I insert a joke about the Spider Mastermind? ^o)

  • (cs) in reply to hack-o-matic
    Anonymous:
    Wow, I just learned something. I realize that I am opening myself up to much criticism by posting this, but I myself have created a few sites which contain pages such as DeleteClient.asp?id=123, where 123 is the client to delete. I always check a session variable to make sure they are logged in before doing the action, but it sounds like that was a Very Bad Idea. The problem is: how do I trigger the action via hyperlink, when I can only pass values in the QueryString?

    You can't. HTML "a" tags always generate GET requests. You need to use a "form" tag with method="post" in order to generate a POST request.

  • Marc Brooks (unregistered)

    Not to mention that ANYONE who does anything but READ data in response to an HTTP GET deserves to have all their data deleted. Any mutation should be only in response to the PUT/POST/DELETE verbs. You'll get nailed by pre-caching tools too, and that's not going to be prevented by login checks of any ilk!

  • (cs) in reply to Keith Gaughan
    Anonymous:
    Total agreement. And it's not just "destructive" actions, but *any* action which causes a stateful change to the data is supposed to be performed by POSTs. That's sorta the whole point of having two separate types of requests in the first place.

    Not quite anything. For instance, every time your webserver writes to its log file, you're causing a state change. However, such changes are ok because they don't interfere with GET's idempotency: you can do it time after time, and it's the same as doing it once: the extra data doesn't interfere with the running of the app.

    What these idiots were doing, now that is another matter entirely.

    Personally, I'm surprised nobody's brought up the "but form buttons are so ugly" chestnut. I've a way to smack that one down too.


    Speaking of which, is anyone else annoyed that when a post in this forum gets 50 replies, the site uses POST rather than GET to go to replies 50-99?  So whenever I'm on page 2 and hit refresh I get prompted as to whether I want to resubmit form data.
  • (cs) in reply to your mom
    your mom:
    loneprogrammer:


    It does not simulate the user clicking "everything."  Only on the links, but not the buttons that submit forms. 



    what about links such as this?

    That's not really a valid URL, so when the GWA sees that in the HREF attribute, it will just do nothing. It does not run JavaScript at all.

  • Whoops (unregistered) in reply to The Internet

    Anonymous:
    OMG... Google ate my home page!

    Whoops

  • (cs) in reply to your mom
    your mom:
    what about links such as this? <a href="javascriptmyform.Submit%28%29">

    If I could slap you over the internet, I would. If you're going to play stupid javascript games instead of doing it right (input type="submit" with appropriate stylesheets to make it look acceptable) at least use <a href="http://some.meaningful.url/" onclick="myform.Submit()">.

  • (cs) in reply to your mom

    your mom:
    the forum software ate my colon, but you know what i mean.  wow.  that didn't come out right.  wow, that didn't either.  i'm gonna stop now.

    LOL! Everyone within earshot of my cube is now wondering just WTF I'm reading.... Colon jokes, gotta love 'em....

    More pertinent to the thread: while the GET/POST thing is a problem, I tend to see the security issue as the more serious and WTF-ish of the two....

    Repeat after Me:

    1. Security cannot be implemented client-side.

    2. Assume denial, prove authentication. In other words, if the security is cookie based, you need to be able to retrieve the cookie, and identify information within the cookie that proves the user is authenticated. Any failure at any step along the way results in denial of authentication.

    Solve the security problem first, then convert your simple links to POST requests...
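
    As a sketch of rule 2 (the helper and session store names below are made up; the point is that every branch that isn't a proven, valid session falls through to "denied"):

    // Deny by default: only a cookie that is present AND maps to a live
    // server-side session gets through. Any failure along the way is a denial.
    function isAuthenticated(request) {
        var token = getCookie(request, "session");   // hypothetical helper
        if (!token) {
            return false;                            // no cookie at all: denied (spiders land here)
        }
        var session = sessionStore.lookup(token);    // hypothetical server-side session store
        if (!session || session.expired) {
            return false;                            // unknown or stale token: denied
        }
        return true;                                 // authentication proven, access allowed
    }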

    But that's just Me

    -Me

  • arty (unregistered) in reply to Keith Gaughan

    Uh ... anybody ever hear of 'GET' actions always being benign and 'POST' actions changing things?
    Even if you're gonna use a login cookie, wouldn't you make the user not be logged in unless the cookie was present?
    Do web developers not realize that you can telnet to port 80 and type anything you like?

  • Bob (unregistered)

    The only logical course of action is to sue Google for hacking the site.

    Sincerely,

    Bob Lablaw

  • (cs) in reply to kipthegreat
    kipthegreat:


    Speaking of which, is anyone else annoyed that when a post in this forum gets 50 replies, the site uses POST rather than GET to go to replies 50-99?  So whenever I'm on page 2 and hit refresh I get prompted as to whether I want to resubmit form data.


    WTF?!

    And here I thought this forum software was so well designed!

    _sigh_ - I wasn't annoyed because I thought it was smarter than me...
  • (cs) in reply to Bob
    Anonymous:

    The only logical course of action is to sue Google for hacking the site.

    Sincerely,

    Bob Lablaw

     

    Cool, I want in on this thedailyWTF.com class action versus Google!!!

    Maybe then the owner could get some better forum software!!

  • (cs) in reply to kipthegreat
    kipthegreat:
    Anonymous:
    Total agreement. And it's not just "destructive" actions, but *any* action which causes a stateful change to the data is supposed to be performed by POSTs. That's sorta the whole point of having two separate types of requests in the first place.


    Not quite anything. For instance, every time your webserver writes to its log file, you're causing a state change. However, such changes are ok because they don't interfere with GET's idempotency: you can do it time after time, and it's the same as doing it once: the extra data doesn't interfere with the running of the app.

    What these idiots were doing, now that is another matter entirely.

    Personally, I'm surprised nobody's brought up the "but form buttons are so ugly" chestnut. I've a way to smack that one down too.



    Speaking of which, is anyone else annoyed that when a post in this forum gets 50 replies, the site uses POST rather than GET to go to replies 50-99?  So whenever I'm on page 2 and hit refresh I get prompted as to whether I want to resubmit form data.

    I always thought, "These really seem familiar" as I (re)read the first 50 posts <grin>

    Yes. That has annoyed me too!

  • (cs) in reply to Bob
    Anonymous:

    The only logical course of action is to sue Google for hacking the site.

    Sincerely,

    Bob Lablaw



    <a href="http://www.centos.org/modules/news/article.php?storyid=127">Jerry Taylor</a> on it
  • (cs) in reply to Bob
    Anonymous:

    The only logical course of action is to sue Google for hacking the site.

    Sincerely,

    Bob Lablaw

    A lawsuit against Google. There's a unique and novel idea!!

  • (cs) in reply to Cooper

    Cooper:
    kipthegreat:


    Speaking of which, is anyone else annoyed that when a post in this forum gets 50 replies, the site uses POST rather than GET to go to replies 50-99?  So whenever I'm on page 2 and hit refresh I get prompted as to whether I want to resubmit form data.


    WTF?!

    And here I thought this forum software was so well designed!

    _sigh_ - I wasn't annoyed because I thought it was smarter than me...

    You know... I'll bet Alex would appreciate a fundraiser to upgrade this forum to, say, vBulletin......
