• (cs) in reply to AlpineR
    AlpineR:
    Bobbo:
    I have a rule of thumb for stuff like this: "if you have to ask what the limit of something is, you're probably doing it wrong."

    How long can a computer program be?

    What's the biggest file I can send over the network?

    How many threads can I run simultaneously?

    All of those questions have answers that changed drastically over the past twenty years. So whether your way of doing it is wrong depends on the state of technology. And the only way to determine that state is to ask.

    Yes, but I think his statement still stands, mainly because he said "you're probably doing it wrong". If he hadn't said "probably", I would argue the same point.

    Now, do I get points for being pedentic?

  • (cs)

    I got it with the redirection once... http://foo.bar.com/DoStuffAction?NEXT_PAGE=http://foo.bar.com/StuffdoneMessage&ErrorPage=http://foo.bar.com/TrytoRecoverFromError?ERROR_PAGE=http://foo.bar.com/OhNoes&SUCCESS_PAGE=http://foo.bar.com/StuffdoneMessage

    ...Only it got worse because there was some actual business logic going into those things, so they would nest upon themselves many many times.

    At the same time that my first project was going in, the QA guys renamed all their test machines from things like r2s4.domain.com to things like halfback.domain.com. Those extra 4 characters (repeated many times) were just enough to push above the URL buffer. Only we didn't get a friendly 414 error in that case. No, Apache core dumped. And I got blamed even though I hadn't touched anything near the code that was breaking.

    Over the next two years there, I saw the max URL size gradually increase from 256 to 512 to 1024 to 2048.

    This was a voicemail system and part of the fun was that we had a mapping utility to format phone numbers (i.e. turn 804-555-1000 to 1-804-555-1000), which had been bastardized to enforce flow control by deciding which URL to go to for a particular phone number on caller-id. I had to put my foot down when they wanted me to increase the buffer for the phone number parser to 4096 bytes...
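    For illustration, the number-formatting half of a mapping utility like that might look like this minimal sketch (the function name and rules are my assumptions, not the original code — and note it does formatting only, none of the bastardized flow control described above):

    ```python
    def normalize_us_number(number: str) -> str:
        """Prefix a bare 10-digit US number with the long-distance '1'."""
        digits = number.replace("-", "")
        if len(digits) == 10 and digits.isdigit():
            return "1-" + number
        return number  # already prefixed, or not a plain US number

    print(normalize_us_number("804-555-1000"))  # 1-804-555-1000
    ```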

  • Ace (unregistered) in reply to KattMan
    KattMan:
    Now, do I get points for being pedentic?

    Only if you tell me what "pedentic" means.

  • (cs) in reply to Ace
    Ace:
    KattMan:
    Now, do I get points for being pedentic?
    Only if you tell me what "pedentic" means.
    Oh come on, obviously he meant "pendantic." Sheesh.
  • (cs) in reply to JimM
    JimM:
    CT:
    This depends on the nature of the query. POST is supposed to be for queries that change state on the server. For a graph or report generation web service (such as in the original post), this is not the case, and GET is recommended.
    And as we all know, EVERYONE applies the standards to the letter...
    CT:
    A reports or graphs generation service ... needs all the data as input.
    In which case you are changing the state on the server (only temporarily, but you are sending it data which it must store (even if only in physical memory) and manipulate) and POST would be the recommended method anyway (if the server doesn't have the data, it can hardly GET it, can it ;^) ).

    You may want to consider the logic of your arguments more carefully, next time...

    Your sarcasm is hilarious...

  • (cs) in reply to jonnyq
    jonnyq:
    And sidebar WTF... the captcha here is totally broken... it's sending a MIME of img/jpeg, which gets blocked by our company's firewall. Had to load that image on imageshack to do the damn thing.
    The firewall is the WTF here, not the captcha.
  • Anonymous (unregistered) in reply to java.lang.Chris;
    java.lang.Chris;:
    His way round this was to store session information in an untyped Java Map (I forgot to mention - he pathologically hated generics as well), serialising it, encrypting it and then hex encoding the resulting bytes.

    You have just described how many ASP.Net pages work :-( The horrors of __VIEWSTATE are haunting my dreams

  • drdamour (unregistered)

    Granted, this probably isn't the best use of URLs, but on one project they wanted a bunch of dynamic charts displayed on a single page. I set up a little GD library wrapper that returned a bar chart as an image from a crafty GET URL and just put up a whole bunch of images. The 414 error came as more data needed to be displayed. I was never too happy with that limitation...

  • (cs) in reply to jonnyq
    Well, AFAICR, HTTP 1.1 doesn't have a way to send a safe request with a post body, where "safe" means that the page can be refreshed, etc., without reposting data. (I can't remember the correct HTTP term for that)
    I think the term you're looking for is idempotent.
  • (cs) in reply to PSWorx
    PSWorx:
    JimM:
    CT:
    This depends on the nature of the query. POST is supposed to be for queries that change state on the server. For a graph or report generation web service (such as in the original post), this is not the case, and GET is recommended.
    And as we all know, EVERYONE applies the standards to the letter...
    CT:
    A reports or graphs generation service ... needs all the data as input.
    In which case you are changing the state on the server (only temporarily, but you are sending it data which it must store (even if only in physical memory) and manipulate) and POST would be the recommended method anyway (if the server doesn't have the data, it can hardly GET it, can it ;^) ).

    You may want to consider the logic of your arguments more carefully, next time...

    Your sarcasm is hilarious...

    Except that it isn't sarcasm, and it's not intentionally hilarious.

    Keep huffing the nitrous oxide, though.

  • (cs)
    And I got blamed even though I hadn't touched anything near the code that was breaking.
    Yes, but did you code the original crap code that relied on URLs like that?
  • (cs) in reply to Bobbo
    Bobbo:
    I have a rule of thumb for stuff like this: "if you have to ask what the limit of something is, you're probably doing it wrong."

    e.g. "how many columns can I have in a database table?" (real example)

    That sounded like a good rule, until I actually thought about it. There are plenty of limits that it can be useful to know - maximum integer size, maximum heap size, maximum size of a certain database field, maximum value of a Unicode character, etc.

    It's true that URL length probably isn't on that list, and that columns per database table certainly isn't, but if you're someone asking these questions, then you're not someone who would know when to apply that general rule and when not to, anyway.

  • (cs) in reply to drdamour
    drdamour:
    Granted, this probably isn't the best use of URLs, but on one project they wanted a bunch of dynamic charts displayed on a single page. I set up a little GD library wrapper that returned a bar chart as an image from a crafty GET URL and just put up a whole bunch of images. The 414 error came as more data needed to be displayed. I was never too happy with that limitation...
    Look, like most Internet protocols, it's incredibly stupid and incredibly not future-proof. Every single RFC I can think of is a wonderful example of the inverse of Moore's Law. Which is interesting, given that on the whole they're dealing with software.

    The whole idea of building a client-server protocol without state wouldn't have happened if Tim Berners-Lee had a clue, rather than being a doofus at CERN.

    (Well, that's not quite true. Sun managed to build exactly the same problem into NFS, for very similar reasons.)

    That doesn't mean you've got to be happy with it, as an IT professional.

    It just means, and in spades, what Dirty Harry would say:

    "Man's got to know his limitations."

    In this case, 2093 bytes is it. Period.

  • (cs) in reply to real_aardvark
    real_aardvark:
    drdamour:
    Granted, this probably isn't the best use of URLs, but on one project they wanted a bunch of dynamic charts displayed on a single page. I set up a little GD library wrapper that returned a bar chart as an image from a crafty GET URL and just put up a whole bunch of images. The 414 error came as more data needed to be displayed. I was never too happy with that limitation...
    Look, like most Internet protocols, it's incredibly stupid and incredibly not future-proof. Every single RFC I can think of is a wonderful example of the inverse of Moore's Law. Which is interesting, given that on the whole they're dealing with software.

    The whole idea of building a client-server protocol without state wouldn't have happened if Tim Berners-Lee had a clue, rather than being a doofus at CERN.

    (Well, that's not quite true. Sun managed to build exactly the same problem into NFS, for very similar reasons.)

    That doesn't mean you've got to be happy with it, as an IT professional.

    It just means, and in spades, what Dirty Harry would say:

    "Man's got to know his limitations."

    In this case, 2093 bytes is it. Period.

    Sad thing is that "teh web" has taken over most of what used to go into client/server applications, and now we have tons of "web apps" that are basically ugly hacks aimed at giving statefulness to a stateless protocol.

    Then again, HTTP was conceived as a document repository protocol. None of the WWW "inventors" ever thought someone would try to build applications on it... or did they???

  • (cs) in reply to CT
    CT:
    Google Charts API uses the same technique: http://code.google.com/apis/chart/

    I would disagree with the "too fucking long" definitions though. The URL should uniquely define the page, and for non-database backed dynamic pages such as the charts api, this means long urls. 255 characters is clearly insufficient, a few hundred KB should be long enough.

    Neither Google Charts nor anyone else uses the same technique. Go back and look at the URL again...
  • fyjham (unregistered)

    From the perspective of having written sites for mobile phones, I've come across this one before. Did you know some phone browsers won't even go over 255 characters? Once you put a huge godawful domain a client wants on the front and add a cookieless session token (because the phone won't support cookies), it's not too unbelievable to hit that cap.

    As far as 2000 characters goes, I can honestly say I'd never click a URL that long even if it worked :P

  • J (unregistered)

    I just pasted that string as a GET after the article's URL, and it was fine!

    Pft, what a big deal over nothing. We live in an age where our GET strings should either be novel-length or we don't have enough to say.

  • (cs)

    "hide hide hide hide hide"

    I would too.

  • Gilhad (unregistered) in reply to KattMan
    Bobbo:
    I have a rule of thumb for stuff like this: "if you have to ask what the limit of something is, you're probably doing it wrong."

    "What the speed limit in town is?" What I do wrong asking this question?

    Or another one : "What is limit of human stupidity" while reading WTF :)

  • kl (unregistered)

    And this is correct. Report is a view of data, doesn't have any important side-effects (from HTTP perspective), so it should use the GET method (RFC 2616 9.1).

  • (cs) in reply to kl
    kl:
    And this *is* correct. Report is a view of data, doesn't have any important side-effects (from HTTP perspective), so it should use the GET method (RFC 2616 9.1).

    Siding with the dinosaurs here - web servers were not meant for running applications. A real program would probably have been better here. What if you want to add things to that? Enough to quadruple the URL?

    Plus, if all the information was posted to the server (assuming the server doesn't have the data already), the query could be assigned an identifier so that all the parameters don't need to be repeated each time the report is accessed and so that other people being sent the report can't unhide the hidden things.

    E.g. www.reports.com/report/SDFAWSER23423. Much better.
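    The suggestion above — POST the filter set once, get back a short identifier, fetch the report by that identifier — can be sketched roughly like this (the names, the hashing scheme, and the in-memory store are all illustrative assumptions, not anyone's actual implementation):

    ```python
    import hashlib
    import json

    # In a real service this would be a database table; a dict stands in here.
    _saved_reports = {}

    def save_report_params(filters: dict) -> str:
        """Store a filter set and return a short, stable identifier for it."""
        key = json.dumps(filters, sort_keys=True)
        report_id = hashlib.sha1(key.encode()).hexdigest()[:12].upper()
        _saved_reports[report_id] = filters
        return report_id

    def load_report_params(report_id: str) -> dict:
        """Look a filter set back up by its identifier."""
        return _saved_reports[report_id]

    rid = save_report_params({"region": "EMEA", "hide": ["col1", "col2"]})
    print(f"www.reports.com/report/{rid}")
    ```

    The URL stays short no matter how many filters there are, and the hidden columns never appear in it, so recipients of the link can't un-hide anything by editing the query string.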

  • (cs) in reply to Anonymous
    Anonymous:
    java.lang.Chris;:
    His way round this was to store session information in an untyped Java Map (I forgot to mention - he pathologically hated generics as well), serialising it, encrypting it and then hex encoding the resulting bytes.
    You have just described how many ASP.Net pages work :-( The horrors of __VIEWSTATE are haunting my dreams
    Amen. I've just started my first .NET programming job and there are already two things I'm trying to work out how never to use: one is VIEWSTATE, which seems to me to simply be a way of posting 4k of extra data for no good reason, and the other is Page.PreviousPage.FindControl. You know, the one you use for getting information from the page whose form sent you to this one. Which I always thought (in a standards-compliant sense, at least) should be done using POST or GET and accessing (in ASP) Request.Form or Request.QueryString. But apparently .NET is too sophisticated for such antiquated concepts ;^(
    kl:
    Report is a view of data, doesn't have any important side-effects (from HTTP perspective), so it should use the GET method (RFC 2616 9.1).
    You know, I wrote a big long snarky comment about how wrong you are if you read the RFC properly (for instance, the definition of GET is section 9.3 of the RFC...). Then I went back to look at the URL and realised that it doesn't actually include the data being processed. All of that pipe-delimited nonsense is actually a filter definition. Which means the URI is defining how to view the data without actually containing any. And it still manages to be that long! So I can now only conclude that the URL itself is not a wtf, but merely a pointer towards the real wtf ® (will that work?) that is handling that ridiculous filter specification. Why would anyone write something like that? WHY!?!
  • (cs) in reply to Gilhad
    Gilhad:
    Bobbo:
    I have a rule of thumb for stuff like this: "if you have to ask what the limit of something is, you're probably doing it wrong."
    "What the speed limit in town is?" What I do wrong asking this question?
    If you're driving the car, you ought to know what the speed limit is, or at least how to find out without asking. If you're not driving the car why do you care? Either way, something's not quite right...

    Don't mistake needing to know the limit of something with needing to ask it... ;^)

  • CT (unregistered) in reply to JimM
    JimM:
    CT:
    A reports or graphs generation service ... needs all the data as input.
    In which case you are changing the state on the server (only temporarily, but you are sending it data which it must store (even if only in physical memory) and manipulate) and POST would be the recommended method anyway (if the server doesn't have the data, it can hardly GET it, can it ;^) ).

    You may want to consider the logic of your arguments more carefully, next time...

    By your logic, EVERY http request should be a POST, as it briefly changes the memory contents of the server and writes to the log files. Temporary "state change" is NOT a state change, the entire point of a state in the HTTP sense is that it is persistent.

  • Somecanuck (unregistered)

    This is the job for... a consultant!

    Replace the crazy long URL you say? NO! Instead, add a NEW piece of code that runs everything through TinyURL first!

    BRILLANT.

  • (cs)

    We ran into a problem with a web application recently where certain links wouldn't do anything. The links were good, but the browser didn't even try to navigate to the target page.

    Turns out that the developer was dumping entire variable scopes into the URL, and one particularly complex page ran up against IE's 1200-something character limit for URLs.
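    A defensive check along these lines would have surfaced the problem at URL-construction time instead of producing links that silently do nothing. The 2083-character figure is the commonly cited cap for older Internet Explorer versions, and the function name is illustrative:

    ```python
    from urllib.parse import urlencode

    # Commonly cited URL length cap for older Internet Explorer versions.
    IE_MAX_URL = 2083

    def build_url(base: str, params: dict) -> str:
        """Build a GET URL, refusing to emit one browsers may drop silently."""
        url = base + "?" + urlencode(params)
        if len(url) > IE_MAX_URL:
            raise ValueError(
                f"URL is {len(url)} chars, over the {IE_MAX_URL} limit; "
                "use POST or a stored-report identifier instead"
            )
        return url
    ```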

  • AC (unregistered) in reply to KattMan
    KattMan:
    Now, do I get points for being pedentic?

    Maybe, if you had spelled it correctly.

  • Epaminaidos (unregistered)

    I don't see the problem. In our application we have multiple requests with a URL length of about 15k :) Definitely not the best framework, but it's working fine.

  • JustChris (unregistered)

    SEO at work! Somewhere along the way that page is going to get indexed by Google.

  • Manic Mailman (unregistered) in reply to kl
    kl:
    And this *is* correct. Report is a view of data, doesn't have any important side-effects (from HTTP perspective), so it should use the GET method (RFC 2616 9.1).

    RFC 2616 9.1 says that GET methods should be safe - it does not say that POST methods should not be safe. It's perfectly legitimate for a POST method request to cause no change to the server state.

  • Aaron552 (unregistered) in reply to KattMan
    KattMan:
    AlpineR:
    Bobbo:
    I have a rule of thumb for stuff like this: "if you have to ask what the limit of something is, you're probably doing it wrong."

    How long can a computer program be?

    What's the biggest file I can send over the network?

    How many threads can I run simultaneously?

    All of those questions have answers that changed drastically over the past twenty years. So whether your way of doing it is wrong depends on the state of technology. And the only way to determine that state is to ask.

    Yes but I think his statement still stands. Mainly because he said "you're probably doing it wrong", if he didn't say probably I would argue the same point.

    Now, do I get points for being pedentic?

    No. I do because you spelled pedantic wrong.

    captcha: eros = Perverts

  • (cs) in reply to CT
    CT:
    JimM:
    CT:
    A reports or graphs generation service ... needs all the data as input.
    In which case you are changing the state on the server (only temporarily, but you are sending it data which it must store (even if only in physical memory) and manipulate)
    By your logic, EVERY http request should be a POST ... the entire point of a state in the HTTP sense is that it is persistent.
    Granted I worded my argument badly and the point I was making didn't come across clearly. I was intending to address the point I've now bolded, and the most important part of my reply is the part I've now italicised. It's merely a side-note that HTTP is intended to be stateless, so "the point of state in the HTTP sense" is that there isn't any ;^).

    If you read RFC 2616 sections 9.3 (GET method) and 9.5 (POST method) it is quite clear GET should be used to retrieve a resource that is identified by the URI; POST should be used if you are sending data to the server to be processed - this is regardless of whether the server stores that data persistently or not.

    However, as one of my earlier comments accepts, having re-read the original article it appears that the URL under discussion does not send data to the server to be processed. It defines a (very large) number of filters to be applied to an existing report. Therefore, GET is the correct method for this request; and therein lies the WTF...
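    The distinction above is visible in how the two requests are assembled: a GET encodes the filter set into the URL itself (bookmarkable, but length-limited — hence the 414), while a POST carries the same data in the request body. A rough standard-library sketch, with placeholder host and filter names:

    ```python
    from urllib.parse import urlencode

    filters = {"report": "sales", "hide": "col1|col2|col3"}

    # GET: the filter set is part of the resource identifier. Bookmarkable
    # and cacheable, but subject to URL length limits (hence HTTP 414).
    get_url = "http://example.test/report?" + urlencode(filters)

    # POST: the same data travels in the request body instead. No practical
    # length limit, but a refresh re-submits and the result can't be bookmarked.
    post_body = urlencode(filters).encode("ascii")

    print(get_url)
    ```

    Note that `urlencode` percent-escapes the pipes, so each `|` in the filter costs three characters of URL budget rather than one.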

  • klmann (unregistered)

    Actually, this bug also exists in phppgadmin, an equivalent of phpMyAdmin for Postgres databases. If you paste SQL queries into the SQL window and then change the database with the drop-down menu, you'll get a nice 414 if the queries are too long.

  • (cs) in reply to CT
    CT:
    Google Charts API uses the same technique: http://code.google.com/apis/chart/

    I would disagree with the "too fucking long" definitions though. The URL should uniquely define the page, and for non-database backed dynamic pages such as the charts api, this means long urls. 255 characters is clearly insufficient, a few hundred KB should be long enough.

    Yeah, well 640K ought to be enough for anybody

  • csm (unregistered) in reply to ThePants999
    ThePants999:
    G:
    I don't even see the url anymore. All I see is a blond here redhead there...
    Excellent :-D

    And if you stare long enough you'll see....

    A schooner!

  • (cs)

    I have custom messages for all of the HTTP errors on my web-site. http://zzo38computer.cjb.net/errors/414.htm

  • (cs) in reply to Bobbo
    Bobbo:
    I have a rule of thumb for stuff like this: "if you have to ask what the limit of something is, you're probably doing it wrong."

    e.g. "how many columns can I have in a database table?" (real example)

    The solution to that one's simple: Just answer "five."

  • Nikolai (unregistered)

    This is just someone trying to overflow the buffer of the Web server. I see tons of these requests in the server logs. Kids playing with some stupid tools downloaded from some weird places...

  • (cs) in reply to JimM
    JimM:
    Granted I worded my argument badly and the point I was making didn't come across clearly. I was intending to address the point I've now bolded, and the most important part of my reply is the part I've now italicised. It's merely a side-note that HTTP is intended to be stateless, so "the point of state in the HTTP sense" is that there isn't any ;^).

    If you read RFC 2616 sections 9.3 (GET method) and 9.5 (POST method) it is quite clear GET should be used to retrieve a resource that is identified by the URI; POST should be used if you are sending data to the server to be processed - this is regardless of whether the server stores that data persistently or not.

    However, as one of my earlier comments accepts, having re-read the original article it appears that the URL under discussion does not send data to the server to be processed. It defines a (very large) number of filters to be applied to an existing report. Therefore, GET is the correct method for this request; and therein lies the WTF...

    Good point, I suppose.

    Ignoring the inadequacies of the underlying protocol, however, I feel that the OP (even if technically correct) is a perfect example of somebody blindly following rules without the faintest notion of the consequences. GET is quite clearly wrong here.

    Given something as brain-dead as HTTP, I'm not sure what solution I'd recommend, but I'd have to think that separating the report data from the filter using something like iframes would be the way to go. Whoop-de-do, an entire MVC inside a single browser page ... perhaps it's time for another framework.

    Or a coffee.

    Or self-inflicted euthanasia.

  • James (unregistered)

    We have a content management system at work that suffers from a similar issue. Editorial is entered into a GUI editor and with the hit of a button, the entire editorial is sent via GET.

    This has led to the editors removing bold and italics and replacing large sections of text with less-polished one-liners.

    In fact, there is even a celebration every time someone gets one entered and it goes through first time without hitting the 414 error.

  • (cs)

    I once inherited a PHP application that dumped an entire database table's contents into a URL parameter. In fact, the format of the URL printed is uncomfortably familiar - down to the pipe-separated data.

    Thankfully, I no longer work there.

  • Tiago Albineli Motta (unregistered)

    I have already seen a form receive an HTTP 414 code. The form should have been using the POST method, but the developer must have forgotten.

  • s. (unregistered) in reply to Bobbo
    Bobbo:
    I have a rule of thumb for stuff like this: "if you have to ask what the limit of something is, you're probably doing it wrong."

    e.g. "how many columns can I have in a database table?" (real example)

    Not really. The valid answers are a byte, a short int, a long int and a bignum.

  • s. (unregistered) in reply to AlpineR
    AlpineR:
    How long can a computer program be?

    What's the biggest file I can send over the network?

    How many threads can I run simultaneously?

    All of those questions have answers that changed drastically over the past twenty years. So whether your way of doing it is wrong depends on the state of technology. And the only way to determine that state is to ask.

    The program should not exceed 10,000 lines per developer or it becomes unmaintainable.

    No known limit on this one, but your bottleneck is the filesystem, expect to be capped at 4GB.

    65535 threads seems to be the hard limit but likely you'll be out of RAM way before that.

    With the rise of Data URIs, the idea of a 'too long URL' becomes somewhat obsolete.
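    For anyone who hasn't met them: a data URI inlines the resource itself into the "URL", so its length scales with the content rather than with any server-side path. A tiny example (the payload is illustrative):

    ```python
    import base64

    payload = b"tiny chart bytes"
    data_uri = "data:text/plain;base64," + base64.b64encode(payload).decode("ascii")

    # The "URL" now *is* the resource; no server is involved at all.
    print(data_uri)
    ```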

  • Errno (unregistered) in reply to Bobbo
    Bobbo:
    I have a rule of thumb for stuff like this: "if you have to ask what the limit of something is, you're probably doing it wrong."

    e.g. "how many columns can I have in a database table?" (real example)

    I take issue with this logic -- if someone doesn't realize what certain limits are and doesn't ask, then they are doing it wrong. Asking the question is part of the process of "doing it right."

    As for your example, what is inherently wrong with asking that question? There are plenty of cases where you could wind up with tables that have an unusually high number of columns, such as summary tables that combine information from many other tables. That's a case where the question at face value may seem indicative of bad design, but it is, in fact, the opposite. Jumping to conclusions without really digging in to the problem is the real case of "doing it wrong."

  • Alex (unregistered) in reply to Bobbo

    And only 255 tables.

    I hate my DBMS

  • vermon (unregistered)

    Happened to me once. Adding products to an order made an AJAX request to the server, and the parameters were never cleared between requests, so when someone bought 11 items she would get an error.

  • mara (unregistered)

    This looks similar to a bugzilla advanced search results URL.

  • self-taught-hacker-chick (unregistered)

    after the first really large one (over 10 million) i could no longer use tinyurl's page without getting a 414.

    my record was over 29 million.

    go ahead ask me. i'll check back.

    (if importunate isn't your style, i'm sorry; right now crowing about small victories feels like it's just the thing to boost my mood)

  • I've worked on that code (unregistered)

    The problem is you can't bookmark a report that is generated using POST data. The data just isn't saved in the bookmark, so you'd get an empty report or some pesky error.

    Management wanted to be able to bookmark reports, so we gave them exactly what they asked for.

Leave a comment on “HTTP 414: Way Too F#%&ing Long”
