Admin
this comment will auto destruct
Admin
AOL in 1996: "Data Retrieval Failure."
Sounds about right. :^)
Admin
If you rely on a form button to delete content, also remember to use the POST method, not GET!
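Something like this minimal sketch, say (delete.php and the field names are made up for illustration):

<form method="post" action="delete.php">
  <input type="hidden" name="page_id" value="112">
  <input type="submit" value="Delete page">
</form>

And in delete.php, refuse anything that isn't a POST:

if ($_SERVER['REQUEST_METHOD'] !== 'POST') {
    header('HTTP/1.1 405 Method Not Allowed');
    exit;
}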
Admin
if(!$Auth) { header('Location: index.php'); exit; }
Admin
Until the Wayback Machine fixes itself, check out aol.com from 1997 here and read all about the beta release of some crazy piece of software called "AOL Instant Messenger"
Admin
Yeah, that will teach them to not make their site RESTful!
Admin
And that's why header redirects like this need a die(); afterward... even if you never expect the code to hit it, better to be safe than sorry.
Admin
Doesn't sound all that RESTful to me. If it were, they'd have used DELETE to delete pages. And I really doubt the Wayback Machine's bot sends off DELETE requests; it should only do GET.
So if they had made it RESTful, this wouldn't have happened.
A good rule is to never let a GET request do anything that changes state.
"Some methods (for example, HEAD, GET, OPTIONS and TRACE) are defined as safe, which means they are intended only for information retrieval and should not change the state of the server. In other words, they should not have side effects, beyond relatively harmless effects such as logging, caching, the serving of banner advertisements or incrementing a web counter. Making arbitrary GET requests without regard to the context of the application's state should therefore be considered safe." http://en.wikipedia.org/wiki/HTTP#Safe_methods
Admin
I don't believe you can call yourself a web developer until you've built an app that uses hyperlinks for deletion and had all your data deleted by a search bot.
I managed to do that on a system not even connected to the internet - it was an experimental app for my own use. And then I decided to experiment with an indexing engine as well...
Admin
If this site doesn't even validate credentials to DELETE a page, what about adding or modifying content?
This sounds like every website defacer's wet dream.
Admin
This brings back memories. Almost the same exact thing happened to us several years ago. We had a backdoor admin page for a supplier "portal" for creating announcements. It had convenient little edit/delete links next to each pre-existing posting. We even had some nice internal documentation which spelled out where and how to create announcements. All was well with the world for a couple of years (security was not really a concern).
Until the infrastructure guys bought a Google search appliance. It dutifully found the documentation -> backdoor -> followed each delete link. It took some effort to track it down. After some chuckling, we started blaming Google for all future mysterious occurrences.
Admin
Actually it has nothing to do with the header. Some responses are not supposed to have a body, and Apache will go to great lengths to strip the body from such a reply before sending it to the client. So the bot can ignore all the headers it wants.
The actual WTF is sending a redirect header but not aborting the control flow, so the header gets queued for output while PHP happily keeps executing the rest of the file (presumably deleting stuff).
So I say there are two WTFs: the actual one pointed out above, and the explanation at the end of the original story.
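Spelled out, the broken shape is roughly this (delete_page() is hypothetical):

// Broken: header() only queues a response header; it does NOT stop the script.
if (!$Auth) {
    header('Location: index.php'); // bot or browser, execution continues...
}
delete_page($_GET['page']);        // ...and the page gets deleted anyway

// Fixed: abort control flow right after the redirect.
if (!$Auth) {
    header('Location: index.php');
    exit;
}
delete_page($_GET['page']);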
Admin
Exactly.
Admin
or just add rel="nofollow"
Admin
You, sir, win the internet today.
Admin
Sigh
This is why you put an exit; statement after the header call...
If you add that one line, you won't have the problem: the script just stops running after the header is sent. (Some servers won't execute redirects properly unless you do this anyway...)
Admin
So indeed, TRWTF is that even after so much trouble, Dan still doesn't understand the issue.
Admin
The article is a bit misleading about headers and PHP, but not off by much.
Setting a Location header on a failed auth is fine. Just make sure you die() before doing anything else.
Most robots/spiders handle 30x redirects just fine. The problem is that PHP doesn't stop executing after header() is called: the rest of the script keeps running unless you die().
Yes, using a <form> would keep the spider away, but that won't keep wrongdoers at bay. Yes, deletes should be done with a POST and not a GET, but that's less important than just making sure the damned script dies when auth fails.
Admin
nofollow is a hint to search engines that you do not endorse the contents of the link because it may have been added by users. It's very badly named: it does not mean "do not follow this link", it means "I do not vouch for the value of this page".
Admin
Man, that was an epic fail :) Not that I'm immune to those myself, but it was hell of a funny :)
Admin
Wow, 20 comments about a PHP blunder and we've yet to see someone say the usual mantra, "TRWTF is PHP".
Admin
Or... stop using if (!$auth) and instead put if ($auth) around each page.
A crawler won't auth, so it gets nothing.
Admin
The WTF is that the check NEVER works unless you add exit or die after the Location header. It doesn't matter whether the client interprets the Location header: after sending it, the script still happily continues and actually deletes the page. So EVEN IF the client obeys the redirect, the page is still gone.

The problem is not exactly that the programmer chose a safe method (GET) for an unsafe action (deleting a page). That is bad design, agreed, but by itself it is not a security issue. The problem is that the programmer either simply forgot the "exit" (a stupid but very human mistake, could happen to anyone) or had no clue what actually happens in client/server communication. To repeat myself: the Location header is not the problem (well, actually it's wrong, since the HTTP spec requires absolute URIs, but let's ignore that for the moment). The problem is what happens after sending the header: the script should terminate instead of continuing.

The other WTF is that the article author draws the wrong conclusion too. Using a form and POST by itself only solves the crawler problem, NOT the security issue of motivated attackers. FAIL, I dare say. ;-)
By the way: Another WTF is that - apparently - there's no CSRF protection in that script...
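A bare-bones version of that protection could look like this sketch (using later PHP functions like random_bytes() and hash_equals() for brevity):

// When rendering the form: issue a per-session token.
session_start();
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
}
echo '<input type="hidden" name="csrf_token" value="' . $_SESSION['csrf_token'] . '">';

// When handling the delete: reject requests without a matching token.
if (!isset($_POST['csrf_token']) || !hash_equals($_SESSION['csrf_token'], $_POST['csrf_token'])) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}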
Admin
If only there was a way to execute a different piece of code when the if condition failed!
Admin
I'm a little disturbed that several people now have condoned wrapping almost an entire file's worth of code in an if() block. Flow control is your friend.
Admin
If you're doing authentication on a per-page basis, what's the alternative? HTTP isn't stateful; you can't control when people are going to hit which pages, so you can't really maintain flow. If you could, all you'd have to do is authenticate them on the first page, and then you could be sure that every page thereafter was fine.
Mind you, I hate crap like the example. If you do if ($auth) { ... }, you're going to get screwed by someone who understands cookies. You need to run an actual sanity check on the cookie data.
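i.e. something closer to this sketch, where the server-side session is the source of truth (user_can_delete() is a made-up helper that re-checks rights in the database):

// Don't trust a client-supplied "logged in" flag; check server-side session state.
session_start();
if (empty($_SESSION['user_id'])) {
    header('Location: index.php');
    exit;
}
if (!user_can_delete($_SESSION['user_id'], (int)$_POST['page_id'])) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}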
Admin
I think everyone is off today... I always like to visit these articles about an hour after I see them in my RSS feed, so I can bet with myself on what will get nitpicked and torn apart in the comments... I find that I'm right 9 times out of 10.
Today though... I gave myself 10 to 1 odds that "differentiate the difference between..." would be the major topic of discussion, with one camp defending its usage and the other calling it the most epic failure on the interwebz today.
And I lost... you cut me deep, TDWTF... you cut me real deep.
Admin
Look on the bright side! You can at least recover your data through archive.org...
Admin
This happened at [a large search company with a funny name] a few years ago when I was there. Some guy was furious, screaming about lawsuits, cold-calling every corporate-looking phone number he could come up with, etc.
He claimed that we were hacking his site to the benefit of his competitors because he wasn't buying enough keywords. Of course, that's completely ridiculous. I think at one point he even restored his site from the pages in our cache.
It happened a few times. Finally someone took a look and explained to him that [funny-name]bot was there to index his site, not delete things, and that having a link right there in the footer of every page that would let anyone delete the page was probably a bad idea.
I don't recall the outcome, but I think he finally went the robots.txt route to prevent his site from being crawled -- by us. Odd that he'd trust his nephew the neophyte coder over people who are largely regarded as experts in their field and more than willing to help him out, but for some people, paranoia is a motivating force.
Admin
Let this be a lesson to all: validating credentials before actually making any changes is better than only validating credentials at the entry point. Better to validate multiple times, if necessary.
A guy I knew once called me out for doing "unnecessary" authentication checks in every function that, well, required authentication. His reasoning was that the function couldn't be reached unless you were authenticated already. That was true at the time, but it was an MVC application, and somebody could easily create a view that didn't check auth credentials. Authentication and verification go at the lowest possible level, not the highest.
Anyway, it came back to bite him later, when somebody did just that, and since he was the one who pulled my authentication out of the source repository, he was the one who lost his job over it.
Authentication: It's good job security.
Also, while POST is your friend for state-changing events, don't rely on it as a safety mechanism. Search engines and other bots can do POSTs too, you know.
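In code, that means each state-changing function re-validates for itself; a rough sketch (the helper functions are invented):

function delete_page($pageId) {
    require_login();                        // hypothetical: exits if not authenticated
    require_permission('delete', $pageId);  // hypothetical: exits if not authorized
    // ... perform the actual deletion ...
}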
Admin
WTF No. 1: Using GETs to do deletes instead of POST.
WTF No. 2: Not properly authenticating the user in the server-side code that performs the delete/update.
WTF No. 3: Hiring numpties (who couldn't web-develop their way out of a paper bag) to build you a website.
Admin
While using POST is a lot better, and yes, you do want to make sure that the script exits after sending a location header, neither of these alone can prevent an XSRF attack. Don't think your work is done if that's all you've done to secure your web apps.
http://en.wikipedia.org/wiki/Cross-site_request_forgery
Admin
I can't believe it; it's 2009, and this is still happening. Probably the whole world knows about this by now.
It would be nice to track down the developer who hung a delete command off a GET request. Then track those who hire such developers. Then track those who hire those who hire such developers. (To make sure we never, never, never end up in such an environment.)
Admin
if (!isset($_SESSION['usr_id']) || !isset($_SESSION['usr_name'])) { header('Location: index.php'); exit; }
would have been the quick fix.
Admin
While PHP certainly isn't TRWTF here, and I can already hear people muttering "you can write ... in any language," there are certain language communities that encourage programmers to ignore the fact that they are doing two entirely separate things (three, if you count the C in MVC): working on the client side and working on the server side. Maybe it's just the teaching of PHP that's at fault.
On the server side you can do far, far more horrible things than just allow systematic bot trawling to delete individual records or files. Anything to do with financial transactions comes to mind. Anything like "rm -rf /" would be a bag of fun.
Web applications should never do anything significant on the server side without proper authentication. (And yes, I've bitten myself on the bum with this one, too.)
Admin
There's something fundamentally wrong with any security scheme that relies on the CLIENT for enforcement. Like, umm, what if the user is deliberately trying to vandalize your site? Even if off-the-shelf browsers will respect your security, surely there are plenty of hackers out there smart enough to write their own socket programs.
This is like those stores that put up "No weapons allowed" signs with the big red slash through a picture of a handgun. Like, yeah, some maniac is going to load up with guns and bombs and head out to the mall with the intention of killing as many people as he can before the police finally arrive to stop him, he's prepared to sacrifice his own life for the pleasure or glory or whatever he gets out of killing innocent people ... and then he's going to see a "No weapons allowed" sign on the door and say, "Oh, darn, I guess I can't do that here."
I'm not naive or arrogant enough to suppose that any security measures I build into a system are absolutely 100% secure. A determined, resourceful hacker could probably figure out a way to beat any security I've ever built. But I at least try to give him a run for his money. I want to make him go to more work than, say, having to "view source" on the page to see the password, or having to put a quote mark in his input string to break my SQL.
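(On that last point, parameterized queries make the quote-mark trick a non-issue; e.g. with PDO:)

// Placeholders keep user input out of the SQL string itself.
$stmt = $pdo->prepare('SELECT id FROM users WHERE name = ? AND pass_hash = ?');
$stmt->execute(array($userName, $passHash));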
Admin
Exactly. TRWTF is that the article writer didn't say this.
Also, this isn't a problem of obeying headers.
If I send form data to "controlPage.php?action=delete&page=112", and the page responds with a redirect header but STILL PERFORMS THE ACTION, that is just a matter of the security being ALL FUCKED UP.
Admin
Hmmm, so Alexa joins the tiger team...
Admin
You could probably just write a simple FF plugin - ignoring Location: headers and spidering all the delete links on a site sounds like a simple task.
Admin
I wonder how on earth the system deleted pages without wrapping the delete-performing block in a check that the user (identified by user ID, store it any way you like) is actually allowed to do so. Overly trusting, and damaging... sheesh
Admin
Screw whether it's a GET or POST request.
Just make sure any page that normally sits behind authorization actually checks that the user is logged in and authorized to perform that function.
Otherwise, any user could just type in the link.
Yes, POSTs can be forged too; there's plenty of code around to submit POST data to a page. That's why the POST target needs to check.
Admin
You just beat me to this. If the fact that it's a GET vs. a POST matters to your app, you're doing something very wrong. Properly check the user's credentials before doing the deletion, and you will not have this problem, period.
Admin
Hmmmmm... so why not, first of all, show the delete link ONLY if you are actually allowed to delete stuff? Why would you show a delete link to a spider? It doesn't really need to index delete pages anyway, so you should prevent it from going there completely with something as simple as:
if ($canDelete) echo 'Click me!';
Admin
you have single-handedly saved these horrendous comments. thank you
Admin
Not all spiders declare themselves, and anyway, how would you tell them apart? Best thing to do is actually protect delete links.
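Putting the thread's advice together, a protected delete endpoint would look roughly like this sketch (delete_page() and user_can_delete() are invented for illustration):

session_start();

// 1. State changes go through POST; safe methods are refused.
if ($_SERVER['REQUEST_METHOD'] !== 'POST') {
    header('HTTP/1.1 405 Method Not Allowed');
    exit;
}
// 2. Authenticate and authorize server-side, at the point of action...
if (empty($_SESSION['user_id']) || !user_can_delete($_SESSION['user_id'], (int)$_POST['page_id'])) {
    header('Location: index.php');
    exit; // 3. ...and actually stop executing after the redirect.
}
// 4. CSRF check before doing anything destructive.
if (!isset($_POST['csrf_token']) || !hash_equals($_SESSION['csrf_token'], $_POST['csrf_token'])) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}
delete_page((int)$_POST['page_id']);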