• Ithryn (unregistered) in reply to Ithryn

    It didn't [:(]

    The sarcasm tag got cut (Nice isn't (sarcasm)). Also a misplaced word, it should read "that the first time you are entering the page "

     

    ----------
    CAPTCHA: ORANGE

    orange

    CAPTCHA: wrong

    This thing is case-sensitive?

  • Ben H (unregistered) in reply to Keith Gaughan
    Anonymous:


    I forgot IE doesn't support inherit, so sue me :-) It's fixed now, so check again. See what I mean now?

    I'm not saying it's an elixir, just that it's not used as well as it ought to be.
    K.


    Still broken. Looks like a big ugly button. Running IE7 beta 2.

    (looks like a link in FF)
  • (cs) in reply to jamesCarr
    jamesCarr:
    paddy:
    gwenhwyfaer:
    bullseye:


    PLEASE don't do this! The number of times I've clicked on what should have been a button on some page http://some.url/page and found myself directed to http://some.url/page/# because onClick events were silently discarded... If it needs to submit a form, make it a submit button. If it needs to execute JavaScript locally, make it an ordinary button. If it's a link, it should take me somewhere AND NOTHING ELSE.



    Then someone did the onClick wrong? 

    I think the old "hyperlinked documents" model is failing to keep up with the fact that people are writing more and more complex systems, so by all means use links, forms, or whatever.  Last time I checked, when you show an HTML layer over form elements (for dynamic menus etc.), the form elements or any embedded element get painted over the top of the HTML, even though they are on a layer behind it. That's a pretty good reason to use links instead of fat widgets, I'd say.
    Also, GET requests are good for "modifier" requests at times: do you want a form button just to change a default language or some nondescript setting?  To update a hit counter?

    I think it's a problem when a programmer has a stylistic programming reason for limiting the freedom of the designers.
    Give them some guidelines, and hard rules where they really are needed, but seriously, if a designer can come up with a richer and more appealing interface that uses links to submit form posts, there is no reason in the world they should be forced to dumb down their design.


    No, you're just someone too dumb to understand html. Otherwise you wouldn't refer to them in dreamweaver speak you dumbshit.

    Naturally, "the old hyperlined document model" you refer to, of which I am assuming you mean those things called web standards, works just fine for those who actually now what they are doing, rather than dragging and dropping imaginary "widgets" and "layers" in frontpage.


    I am not very familiar with dreamweaver speak, care to enlighten me as to which terms are in its exclusive lexicon?

    I understand HTML quite well, actually; I've been writing web applications for about 12 years.  The "old hyperlinked document model" that I refer to is the inane system whereby we take a system designed to link together related articles and build entire network clients out of it, and which ends up so convoluted that 90% of the data transmitted with each request is just rebuilding the identical UI that was in the last request, with a tiny change of data in the middle of the application.  Could you imagine if, when typing in MS Word, every time you wanted to change the selected font it had to rebuild the entire application layout just to update which font name appears as selected?  The most important elements of any commercial system are UI fluidity and aesthetic appeal.  Any programmer worth their salt will be up to that challenge, and given that designers will already have to make some compromises, it's on the programmers to limit those infractions as much as possible.  Whining about a link that calls a POST request seems like a really trivial reason to make a designer's job that much harder.


  • John (unregistered) in reply to Keith Gaughan

    re: Prefetching,

    I Googled for "attrs" using Firefox not long ago, and while still looking at the Google site, I was asked to accept or decline a security certificate from a .mil site.

    Turns out Firefox was prefetching the link to atrrs.mil which offered the certificate.

  • Keith Gaughan (unregistered) in reply to Ithryn
    Anonymous:

    CAPTCHA: ORANGE

    orange

    CAPTCHA: wrong

    This thing is case-sensitive?



    Kinda. There are two reasons why it might not work. First, the stencil font, even though it appears to be uppercase, is in fact lowercase. Second, caching sometimes interferes with the captcha's contents.

    The best you can do is either sign up or repeatedly try to satisfy the beast.
  • Keith Gaughan (unregistered) in reply to Ben H
    Anonymous:
    Anonymous:


    I forgot IE doesn't support inherit, so sue me :-) It's fixed now, so check again. See what I mean now?

    I'm not saying it's an elixir, just that it's not used as well as it ought to be.
    K.


    Still broken. Looks like a big ugly button. Running IE7 beta 2.

    (looks like a link in FF)


    Hmmm... looks fine in IE6. Any chance of linking to a screenshot? Looks like a job for IE's preprocessor comment yokes.
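
    Just so we're talking about the same thing, a conditional comment along these lines is what I have in mind; the rule inside is only a placeholder for whatever IE-specific override turns out to be needed:

        <!--[if lt IE 7]>
        <style type="text/css">
        /* placeholder: IE-only overrides for the button styling would go here */
        input.linklike { border: none; background: transparent; }
        </style>
        <![endif]-->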
  • Keith Gaughan (unregistered) in reply to paddy
    paddy:
    jamesCarr:
    paddy:
    gwenhwyfaer:
    bullseye:


    PLEASE don't do this! The number of times I've clicked on what should have been a button on some page http://some.url/page and found myself directed to http://some.url/page/# because onClick events were silently discarded... If it needs to submit a form, make it a submit button. If it needs to execute JavaScript locally, make it an ordinary button. If it's a link, it should take me somewhere AND NOTHING ELSE.



    Then someone did the onClick wrong? 

    I think the old "hyperlinked documents" model is failing to keep up with the fact that people are writing more and more complex systems, so by all means use links, forms, or whatever.  Last time I checked, when you show an HTML layer over form elements (for dynamic menus etc.), the form elements or any embedded element get painted over the top of the HTML, even though they are on a layer behind it. That's a pretty good reason to use links instead of fat widgets, I'd say.
    Also, GET requests are good for "modifier" requests at times: do you want a form button just to change a default language or some nondescript setting?  To update a hit counter?

    I think it's a problem when a programmer has a stylistic programming reason for limiting the freedom of the designers.
    Give them some guidelines, and hard rules where they really are needed, but seriously, if a designer can come up with a richer and more appealing interface that uses links to submit form posts, there is no reason in the world they should be forced to dumb down their design.


    No, you're just someone too dumb to understand html. Otherwise you wouldn't refer to them in dreamweaver speak you dumbshit.

    Naturally, "the old hyperlined document model" you refer to, of which I am assuming you mean those things called web standards, works just fine for those who actually now what they are doing, rather than dragging and dropping imaginary "widgets" and "layers" in frontpage.


    I am not very familiar with dreamweaver speak, care to enlighten me as to which terms are in its exclusive lexicon?

    I understand HTML quite well, actually; I've been writing web applications for about 12 years.  The "old hyperlinked document model" that I refer to is the inane system whereby we take a system designed to link together related articles and build entire network clients out of it, and which ends up so convoluted that 90% of the data transmitted with each request is just rebuilding the identical UI that was in the last request, with a tiny change of data in the middle of the application.  Could you imagine if, when typing in MS Word, every time you wanted to change the selected font it had to rebuild the entire application layout just to update which font name appears as selected?  The most important elements of any commercial system are UI fluidity and aesthetic appeal.  Any programmer worth their salt will be up to that challenge, and given that designers will already have to make some compromises, it's on the programmers to limit those infractions as much as possible.  Whining about a link that calls a POST request seems like a really trivial reason to make a designer's job that much harder.


    Paddy, you're forgetting that the web is primarily distributed hypertext. That we attempt to build applications on top of it that HTML was never designed for is not HTML's fault. If you want to build thick client apps, go do that.

    And you still haven't given me an example where you need tremendous hacks to satisfy 'UI fluidity and aesthetic appeal'. Put your money where your mouth is rather than giving out.

    Now, all this whining about HTTP POST is happening for good reason. The web was designed to scale and scale extremely well. A big part of that scaling is caching. HTTP GET is the most common request type, and for the web to scale it must be idempotent (very important distributed computing word there: look it up). Without idempotency, caching proxies couldn't exist and your servers would be continually pounded by requests.

    Protocols are contracts of expected behaviour, and part of the HTTP contract is that GET is idempotent, so the load can be distributed over the whole system through caching. Once you break this and have HTTP GETs causing significant side-effects, you're buggering with the contract.

    And javascript links are bad because they break the user's expectations of what a link does.
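
    To make that concrete, the markup being argued for is roughly this (the action URL, field name, and class name here are made up purely for the example):

        <!-- instead of <a href="delete.cgi?page=2456">Delete</a>, a GET with side effects: -->
        <form method="post" action="delete.cgi">
            <input type="hidden" name="page" value="2456">
            <!-- a real submit button; CSS can make it look like a link if that's what the design calls for -->
            <input type="submit" class="linklike" value="Delete this page">
        </form>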

    K.
  • (cs) in reply to loneprogrammer
    loneprogrammer:
    paddy:

    I think the old "hyperlinked documents" model is failing to keep up with the fact that people are writing more and more complex systems, so by all means use links, forms, or whatever.  Last time I checked, when you show an HTML layer over form elements (for dynamic menus etc.), the form elements or any embedded element get painted over the top of the HTML, even though they are on a layer behind it. That's a pretty good reason to use links instead of fat widgets, I'd say.

    No, that's a bug in IE.  No other web browser does this.  IE should be fixed to respect the HTML standard.

    paddy:

    Also, GET requests are good for "modifier" requests at times: do you want a form button just to change a default language or some nondescript setting?  To update a hit counter?

    No, that violates the HTTP standard!
    Quote RFC 2616:
    13.9 Side Effects of GET and HEAD

    Unless the origin server explicitly prohibits the caching of their
    responses, the application of GET and HEAD methods to any resources
    SHOULD NOT have side effects that would lead to erroneous behavior if
    these responses are taken from a cache.



    IE bug or not (actually it happens in Firefox too), it's something that designers have to deal with, since so many consumers use IE and they are the ones that need to be made happy.  Any design plan that relies on MS fixing a bug in IE is doomed to failure.
  • Dan Fraser (unregistered) in reply to Keith Gaughan

    There are a lot of bots on the web that will POST spam to anything they see that looks like a form as well, so if you've wrapped your delete operation in a form submit button, you're not much safer.

    Still makes sense to use GET and POST properly though.

    And authenticate!

  • (cs) in reply to Keith Gaughan
    Anonymous:
    paddy:
    jamesCarr:
    paddy:
    gwenhwyfaer:
    bullseye:


    PLEASE don't do this! The number of times I've clicked on what should have been a button on some page http://some.url/page and found myself directed to http://some.url/page/# because onClick events were silently discarded... If it needs to submit a form, make it a submit button. If it needs to execute JavaScript locally, make it an ordinary button. If it's a link, it should take me somewhere AND NOTHING ELSE.



    Then someone did the onClick wrong? 

    I think the old "hyperlinked documents" model is failing to keep up with the fact that people are writing more and more complex systems, so by all means use links, forms, or whatever.  Last time I checked, when you show an HTML layer over form elements (for dynamic menus etc.), the form elements or any embedded element get painted over the top of the HTML, even though they are on a layer behind it. That's a pretty good reason to use links instead of fat widgets, I'd say.
    Also, GET requests are good for "modifier" requests at times: do you want a form button just to change a default language or some nondescript setting?  To update a hit counter?

    I think it's a problem when a programmer has a stylistic programming reason for limiting the freedom of the designers.
    Give them some guidelines, and hard rules where they really are needed, but seriously, if a designer can come up with a richer and more appealing interface that uses links to submit form posts, there is no reason in the world they should be forced to dumb down their design.


    No, you're just someone too dumb to understand html. Otherwise you wouldn't refer to them in dreamweaver speak you dumbshit.

    Naturally, "the old hyperlined document model" you refer to, of which I am assuming you mean those things called web standards, works just fine for those who actually now what they are doing, rather than dragging and dropping imaginary "widgets" and "layers" in frontpage.


    I am not very familiar with dreamweaver speak, care to enlighten me as to which terms are in its exclusive lexicon?

    I understand HTML quite well, actually; I've been writing web applications for about 12 years.  The "old hyperlinked document model" that I refer to is the inane system whereby we take a system designed to link together related articles and build entire network clients out of it, and which ends up so convoluted that 90% of the data transmitted with each request is just rebuilding the identical UI that was in the last request, with a tiny change of data in the middle of the application.  Could you imagine if, when typing in MS Word, every time you wanted to change the selected font it had to rebuild the entire application layout just to update which font name appears as selected?  The most important elements of any commercial system are UI fluidity and aesthetic appeal.  Any programmer worth their salt will be up to that challenge, and given that designers will already have to make some compromises, it's on the programmers to limit those infractions as much as possible.  Whining about a link that calls a POST request seems like a really trivial reason to make a designer's job that much harder.


    Paddy, you're forgetting that the web is primarily distributed hypertext. That we attempt to build applications on top of it that HTML was never designed for is not HTML's fault. If you want to build thick client apps, go do that.

    And you still haven't given me an example where you need tremendous hacks to satisfy 'UI fluidity and aesthetic appeal'. Put your money where your mouth is rather than giving out.

    Now, all this whining about HTTP POST is happening for good reason. The web was designed to scale and scale extremely well. A big part of that scaling is caching. HTTP GET is the most common request type, and for the web to scale it must be idempotent (very important distributed computing word there: look it up). Without idempotency, caching proxies couldn't exist and your servers would be continually pounded by requests.

    Protocols are contracts of expected behaviour, and part of the HTTP contract is that GET is idempotent, so the load can be distributed over the whole system through caching. Once you break this and have HTTP GETs causing significant side-effects, you're buggering with the contract.

    And javascript links are bad because they break the user's expectations of what a link does.

    K.


    I have no problem with hypertext documents being used to display hypertext content.  I never said it was HTML's fault that we use it to build application UIs; I am just saying it's used in a convoluted way in order to get the results we want.  It is still better than writing thick clients, of course, but that doesn't make HTML any more ideal just because the alternatives are worse.

    I already mentioned that form components (any native widget) always draw over dynamic HTML in at least a good portion of modern web browsers, isn't that a good reason to avoid using form components in those situations?

    Caching proxies are great for the static content the web was built for, but how many GET requests actually show static content these days in business applications?  Every request shows a different banner ad, may or may not say "hello [your name here]" and "you have [x] messages" depending on if you logged out via another link or not, or if someone sent you a message since the last time you pulled the homepage. 

    I absolutely agree that grievous misuse of GET requests is a Very Bad Thing (such as in the OP), and as a general rule you should not modify the server's state via a GET request.  But consider Yahoo Mail: you have to click a hyperlink, which issues a GET request to view your mail message and changes the server's data regarding that message from "unread" to "read" right then.
    If you used a POST for that, you could not open the message in a new tab, you would bloat the HTML considerably by loading it up with tons of form tag sets, and any cache of the message list page would be bad anyway, as it would show stale data about which messages were read or not, meaning you'd need to fetch that page by POST request too.  So much for clicking a link to your inbox - that would have to be a form button as well.

    In the Real World, people tend to use GET requests a whole lot, and the pages expire immediately instead of caching, because the content does or can constantly change even for identical GET requests. 

    And secondarily: using a hyperlink to submit a form does not break any client/server protocol contract, as the server does not care how the request is generated as long as it's valid.
  • (cs) in reply to Keith Gaughan
    Anonymous:

    And javascript links are bad because they break the user's expectations of what a link does.

    K.


    Just a followup about this:  how would you recommend interacting with javascript content? 

    Personally, I would contend that the design should give the user a good expectation of what the link does.  If a "next" link goes to another page, or shows the next image in an image tag on the existing page via javascript, what matters is that the system is intuitive to the target audience, even if some engineers find it baffling that you'd use a hyperlink for anything but hyperlinking to another URL despite a design that clearly indicates the intended functionality.
  • (cs) in reply to paddy
    paddy:
    Personally, I would contend that the design should give the user a good expectation of what the link does.  If a "next" link goes to another page, or shows the next image in an image tag on the existing page via javascript, what matters is that the system is intuitive to the target audience, even if some engineers find it baffling that you'd use a hyperlink for anything but hyperlinking to another URL despite a design that clearly indicates the intended functionality.


    Wow, such a bias to what the web page looks like, visually, to a human... which would be great for a real user interface on a real client/server application. But the web is designed for shuffling text, with pointers to other text, and I actually want to use it that way. By all means avail yourself of all the flashy features you want when you know they are supported - but for God's sake test that the browser you're sending stuff to supports it first, and - much more importantly - degrade gracefully when it doesn't! Not doing that work is just negligent, however you dress it up.

    Remember, it's not just me and my twisted preference for browsers that don't support JavaScript. What about users dependent on screen readers? Should blind people not be allowed to submit forms? (In a day and age where anti-discrimination legislation is springing up all over the place, this could turn into a legal issue too, before too long.)

    Oh, and whilst you're focusing on client/server contracts, don't forget the service provider/customer contract - or, cynically put, "if you tell me I'm too primitive to use your services, how many people can I dissuade from being your customers?"

  • (cs) in reply to gwenhwyfaer
    I:
    for God's sake test that the browser you're sending stuff to supports it first, and - much more importantly - degrade gracefully when it doesn't!


    If you want an example of how this can be done well, look at GMail, whose "basic HTML" view is good enough that I sometimes even prefer it to the standard view. That should be the benchmark.

  • ki (unregistered) in reply to Omry Yadan
    Anonymous:
    Well, at least he can now use google cached as a backup.

    lol

  • Ben (unregistered) in reply to arty
    Anonymous:
    Uh ... anybody ever hear of 'GET' actions always being benign and 'POST' actions changing things?

    I don't know which rock I've been hiding under, but I've simply never heard that.+o(
    Perhaps it's so obvious that nobody thought of mentioning it or writing it in a book?
  • (cs) in reply to paddy
    paddy:

    I have no problem with hypertext documents being used to display hypertext content.  I never said it was HTML's fault that we use it to build application UIs; I am just saying it's used in a convoluted way in order to get the results we want.  It is still better than writing thick clients, of course, but that doesn't make HTML any more ideal just because the alternatives are worse.


    That's the problem with technology: it always falls short of how we'd like it to be.

    paddy:

    I already mentioned that form components (any native widget) always draw over dynamic HTML in at least a good portion of modern web browsers, isn't that a good reason to avoid using form components in those situations?


    And I've already said that you're wrong. There's one, and only one marginally popular browser that does that, and that's IE, and even then it only occurs with the SELECT element. And that's a problem that MS set out to fix with IE7. Browsers rarely use native widgets for that very same reason.

    paddy:

    Caching proxies are great for the static content the web was built for, but how many GET requests actually show static content these days in business applications?  Every request shows a different banner ad, may or may not say "hello [your name here]" and "you have [x] messages" depending on if you logged out via another link or not, or if someone sent you a message since the last time you pulled the homepage. 



    Caching proxies are great full stop. That's why HTTP provides mechanisms for when you really, really don't want content cached. In HTTP/1.0, there was Pragma, and then in HTTP/1.1, we got Cache-Control, which gave us fine-grained control over caching. There's also the Expires header for when we want to limit the amount of time information is cached, conditional GETs using the ETag and Last-Modified headers so that we only end up using bandwidth to transmit fresh information, and a whole host of other facilities.

    Me, I'm very careful with caching. Wherever I'm able to serve information that's slightly out of date, I do. It's only in places where the information must be up to date that I prevent caching. Also, when I'm serving a user that's logged into an app, if I'm sending information to them that I want cached but don't want any intermediate servers to hold onto, I use Cache-Control: private to ensure it's only cached in their local cache and not on any intermediate server.
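
    For anyone following along, the response headers being talked about look roughly like this; the values are purely illustrative, only the header names matter:

        HTTP/1.1 200 OK
        Cache-Control: private, max-age=60
        Expires: Thu, 01 Jun 2006 10:00:00 GMT
        ETag: "abc123"
        Last-Modified: Thu, 01 Jun 2006 09:59:00 GMT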

    I care about ensuring that the apps I write can cope with scaling up and down smoothly. Do you?

    paddy:

    I absolutely agree that grievous misuse of GET requests is a Very Bad Thing (such as in the OP), and as a general rule you should not modify the server's state via a GET request.  But consider Yahoo Mail: you have to click a hyperlink, which issues a GET request to view your mail message and changes the server's data regarding that message from "unread" to "read" right then.


    Something like that is an edge case, I'll admit that. Similarly, logging requests is another edge case that gets through. But you seem to be misunderstanding what idempotence is.

    paddy:

    In the Real World, people tend to use GET requests a whole lot, and the pages expire immediately instead of caching, because the content does or can constantly change even for identical GET requests.


    The vast majority cache. Really, they do. It might not seem like it sometimes, but they do. Caching doesn't have to last all that long. If a cached copy lasts for a second, and that's a cache of a page on a server that's being slashdotted, that's an awful lot of scalability.

    And I work in the Real World. The systems I deal with might not have to scale like the likes of Amazon or Google, but they still have to cope with a lot of pounding, and the more processing they need to do, the more it costs and the slower things get. I take advantage of caching wherever it's necessary.


    And secondarily: using a hyperlink to submit a form does not break any client/server protocol contract, as the server does not care how the request is generated as long as it's valid.


    No it doesn't, nor did I ever say it did, but it's a nasty and unnecessary hack and has been for quite some time.

    K.
  • (cs) in reply to gwenhwyfaer
    gwenhwyfaer:
    paddy:
    Personally, I would contend that the design should give the user a good expectation of what the link does.  If a "next" link goes to another page, or shows the next image in an image tag on the existing page via javascript, what matters is that the system is intuitive to the target audience, even if some engineers find it baffling that you'd use a hyperlink for anything but hyperlinking to another URL despite a design that clearly indicates the intended functionality.


    Wow, such a bias to what the web page looks like, visually, to a human... which would be great for a real user interface on a real client/server application. But the web is designed for shuffling text, with pointers to other text, and I actually want to use it that way. By all means avail yourself of all the flashy features you want when you know they are supported - but for God's sake test that the browser you're sending stuff to supports it first, and - much more importantly - degrade gracefully when it doesn't! Not doing that work is just negligent, however you dress it up.

    Remember, it's not just me and my twisted preference for browsers that don't support JavaScript. What about users dependent on screen readers? Should blind people not be allowed to submit forms? (In a day and age where anti-discrimination legislation is springing up all over the place, this could turn into a legal issue too, before too long.)

    Oh, and whilst you're focusing on client/server contracts, don't forget the service provider/customer contract - or, cynically put, "if you tell me I'm too primitive to use your services, how many people can I dissuade from being your customers?"



    I am not biased towards making a site useful to humans, but most of my clients are, and I have to make them happy. They want attractive sites that get their potential customers interested.  If they want to add some functionality for the majority that makes it hard for blind people, it's their call.  They can always offer a thin version of the site if they feel it is economical.  Usually, it is cheaper to offer the thin version as well instead of slimming down the features of the main site.  You have to offer the majority of your customers the best site you can right off the bat, or they won't remain your customers.
    I agree you should gracefully tell someone they need to turn on cookies or enable JavaScript to use a site, as with all error conditions.  Most web commerce isn't about accessibility, though; it's about commerce, so if it works for a company's business plan to alienate 5% of web users then, for good or ill, it really is their call.  I use Windows, but if I can't run some company's catalog CD on a Mac, it's because they targeted Windows users, not Mac users, and again it's their call to target consumers that possess a specific type or level of technology.
  • (cs) in reply to Keith Gaughan
    Keith Gaughan:

    That's the problem with technology: it always falls short of how we'd like it to be.


    Exactly, and we have to work with what it is. 

    Keith Gaughan:

    And I've already said that you're wrong. There's one, and only one marginally popular browser that does that, and that's IE, and even then it only occurs with the SELECT element. And that's a problem that MS set out to fix with IE7. Browsers rarely use native widgets for that very same reason.


    I tested this out in Firefox; it appears that a submit button paints under layers as you said, though Flash still paints over them.  I agree that when MS fixes it and it becomes a rare issue, this will get easier.  Still, you can't tell users to wait until a new browser is released to resolve an issue; you have to work with the technology you have at the moment, and a lot of people use IE.  I am happy to know they are working on it - thanks for pointing it out.

    Keith Gaughan:

    Caching proxies are great full stop. That's why HTTP provides mechanisms for when you really, really don't want content cached. In HTTP/1.0, there was Pragma, and then in HTTP/1.1, we got Cache-Control, which gave us fine-grained control over caching. There's also the Expires header for when we want to limit the amount of time information is cached, conditional GETs using the ETag and Last-Modified headers so that we only end up using bandwidth to transmit fresh information, and a whole host of other facilities.

    Me, I'm very careful with caching. Wherever I'm able to serve information that's slightly out of date, I do. It's only in places where the information must be up to date that I prevent caching. Also, when I'm serving a user that's logged into an app, if I'm sending information to them that I want cached but don't want any intermediate servers to hold onto, I use Cache-Control: private to ensure it's only cached in their local cache and not on any intermediate server.

    I care about ensuring that the apps I write can cope with scaling up and down smoothly. Do you?


    I agree with caching where possible, and of course scalability is very important.  Ironically, it would be really nice if web applications were designed more like client-side applications, adding elements as children of other elements, where the higher-level design elements could be left intact while you flipped next/prev through data in whatever field area it is displayed in.  That would make things a lot easier on the server, and it would make it easier to partition out what data is cacheable, what is not, and what is actually already on the client.  That is, without resorting to frames, of course.

    Keith Gaughan:

    Something like that is an edge case, I'll admit that. Similarly, logging requests is another edge case that gets through. But you seem to be misunderstanding what idempotence is.


    I think most web applications are edge cases.  Whether you check your email, bid on auctions, process reports, etc., it comes up.  Many news and shopping sites can and do utilize effective caching, of course, since they do have high traffic to rather static pages.  I absolutely admit it is important to use it where possible.

    Keith Gaughan:

    The vast majority cache. Really, they do. It might not seem like it sometimes, but they do. Caching doesn't have to last all that long. If a cached copy lasts for a second, and that's a cache of a page on a server that's being slashdotted, that's an awful lot of scalability.

    If the site is a web application that requires the user to register and/or log in, all that traffic will still require that each user gets the page generated with a "hello [x]" at the top, and caching will not help.

    Keith Gaughan:

    And I work in the Real World. The systems I deal with might not have to scale like the likes of Amazon or Google, but they still have to cope with a lot of pounding, and the more processing they need to do, the more it costs and the slower things get. I take advantage of caching wherever it's necessary.


    And of  course I agree.  The caching issue came up in context of complex user interfaces, which generally are in complex web applications such as ones that help you check your email, bid on products, or otherwise provide lots of unique data.  In those cases, the user often gets different data off the same URL even a second later.  They are also generally personalized to the user that is logged in, and cannot be reused for others logged in on other accounts. 

    Keith Gaughan:

    No it doesn't, nor did I ever say it did, but it's a nasty and unnecessary hack and has been for quite some time.


    As far as I am concerned, if a designer finds it useful, and does it gracefully, the experience is transparent to the user who only knows they like how the site works.  We can of course disagree.
  • SpComb (unregistered) in reply to kipthegreat
    kipthegreat:
    Anonymous:
    I seem to recall a similar issue with another Google product - it was one of those things that's supposed to speed up internet browsing by simulating the user clicking on everything.  If this is the case, even with a good authentication scheme, you're in danger of deleting everything, because this is a direct user simulation (allowing access to all user cookies and such).

    Google Web Accelerator. As I said, that's because people were using GET requests to implement destructive actions, which is a big no-no. All GWA did was expect that the sites you used conformed to the HTTP 1.0 and 1.1 specs. To be frank, the people who produce sites and apps that would let any spider, proxy, or prefetcher do something like that deserve a Darwin Award for stupidity.

    Or a Daily WTF.


    It should be noted that neither Google Web Accelerator nor Firefox will prefetch a URL containing a query string (although both mention in their FAQs that they reserve the right to do so in the future, in accordance with the HTTP specs):

    http://webaccelerator.google.com/webmasterhelp.html#prefetch3
    http://www.mozilla.org/projects/netlib/Link_Prefetching_FAQ.html#Are_there_any_restrictions_on_what_is


    mod_rewrite. It makes URLs so much nicer!

    http://foo.gov/PageDelete/2456

    It would be interesting to see some poor employee getting blamed for deleting the whole site.
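
    Something along these lines, presumably; the script name and parameter are invented for the example, but the point is that the query string disappears from the visible URL, so the "we don't prefetch URLs with query strings" rule no longer protects you:

        RewriteEngine On
        # pretty URL -> hypothetical delete script
        RewriteRule ^/?PageDelete/([0-9]+)$ /delete.cgi?page=$1 [L]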
  • (cs) in reply to alt
    Anonymous:
    sweet!, can I insert a joke about the Spider mastermind? ^o)

    Or perhaps Arachnotron :)

    Got a WTF myself, can't actually play Doom3 'cause I get too scared... WTF?
  • (cs) in reply to Keith Gaughan
    Anonymous:
    I forgot IE doesn't support inherit, so sue me :-) It's fixed now, so check again. See what I mean now?

    I'm not saying it's an elixir, just that it's not used as well as it ought to be.

    K.

    Doesn't work in Konqueror, though I wasn't surprised. (I get the same thing on LycosMail, which uses a similar technique to change the appearance of the login button).

  • (cs) in reply to kipthegreat
    kipthegreat:

    Come on, having a "view text only" button at the bottom of a page would look retarded.  But having a "?textonly=1" link violates the standard.  An onclick:submit() link, though, works nicely for anyone not trapped in 1998.


    No. Your default view should be text-only friendly, and the button at the (bottom|top) should say "view rich content site". Or you could just use CSS and not bother about the rich content at all. There are very, very few sites where rich content of any sort makes sense.
  • Benjamin McKraken (unregistered) in reply to Keith Gaughan

    (Re: http://talideon.com/wiki/?doc=WikiEtiquette)

    It's a form button, huh? Very clever, Lord Smartington Cleverboots of Brainingworth Hall.

    That must be why it looks different from the links (it has a dark rectangle background for me) and acts differently from the links (caption moves when clicked, dotted border) and generally creates a jarring 'is this a button? why is it not like the other links?' feeling.

    Smooth. Real smooth.

    Seriously, if you're going to do something, do it right and test in more than one browser. If you must test in one browser, make it a standards-compliant one.

  • Hravn (unregistered) in reply to loneprogrammer
    loneprogrammer:
    paddy:

    I think the old "hyperlinked documents" model is failing to keep up with the fact that people are writing more and more complex systems, so by all means use links, forms, or whatever.  Last time I checked, when you show an HTML layer over form elements (for dynamic menus etc.), the form elements or any embedded element get painted over the top of the HTML, even though they are on a layer behind it. That's a pretty good reason to use links instead of fat widgets, I'd say.

    No, that's a bug in IE.  No other web browser does this.  IE should be fixed to respect the HTML standard.

    There is an evil hack for this: just put an IFRAME with the same width and height as the div/layer underneath it and it will work... (it will overlay SELECTs, which are the only widgets that get painted above objects with a higher z-index).
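
    For anyone who hasn't seen it, the hack looks something like this; the sizes and positioning are made up, and you'd normally size the IFRAME from script to match the menu:

        <!-- the dropdown layer -->
        <div style="position: absolute; top: 20px; left: 0; width: 200px; height: 150px; z-index: 2;">
            ...menu items...
        </div>
        <!-- the shim: an empty IFRAME of the same size just beneath it,
             so IE paints it (rather than any SELECT) under the menu -->
        <iframe src="about:blank" frameborder="0" scrolling="no"
                style="position: absolute; top: 20px; left: 0; width: 200px; height: 150px; z-index: 1;"></iframe>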
  • (cs) in reply to loneprogrammer
    loneprogrammer:

    No, that violates the HTTP standard!
    Quote RFC 2616:
    13.9 Side Effects of GET and HEAD

    Unless the origin server explicitly prohibits the caching of their
    responses, the application of GET and HEAD methods to any resources
    SHOULD NOT have side effects that would lead to erroneous behavior if
    these responses are taken from a cache.


    I'm not defending using GET in this manner, but for the record, there's no violation of the spec here. "SHOULD NOT" is different from "MUST NOT", and is defined in RFC 2119 thusly:


    4. SHOULD NOT   This phrase, or the phrase "NOT RECOMMENDED" mean that
    there may exist valid reasons in particular circumstances when the
    particular behavior is acceptable or even useful, but the full
    implications should be understood and the case carefully weighed
    before implementing any behavior described with this label.


    I'll be the first to admit that this particular behaviour is not acceptable or even useful, but you get my point, I think.
  • Salagir (unregistered) in reply to Cooper

    That's what looked like the best ending to me :)

    Sue Google for "non-respect of the conventional browsing behaviour".

    People should not be allowed to go on the internet without JS and cookies! These guys are terrorists! I hope IE7 won't allow disabling such things.

    "... and then... we .. will... have... peace." (Palpatine)

  • Mike (unregistered) in reply to Omry Yadan

    LOL!

  • (cs) in reply to ammoQ
    ammoQ:
    Whiskey Tango Foxtrot Over:

    I look forward to the day I get a full explaination of your system. You should post it here.

    [;)]



    I wrote that system back in 1997 or 1998 and it was one of my first non-hello-world CGI programs. Since it was written in C and CGI libs were sparse then, using virtual paths was slightly easier than parsing an HTTP GET. And besides that spider issue, it worked pretty well. Today, in the age of AJAX, Web 2.0 and what-not, it's obviously an easy laugh (and no longer used).

    I bow to you, sir; I have the utmost respect for people writing CGI in C (and there still are such people - I know of a forum written in C).

    marvin_rabbit:
    Anonymous:
    Total agreement. And it's not just "destructive" actions, but *any* action which causes a stateful change to the data is supposed to be performed by POSTs. That's sorta the whole point of having two separate types of requests in the first place.

    Not quite anything. For instance, every time your webserver writes to its log file, you're causing a state change. However, such changes are ok because they don't interfere with GET's idempotency: you can do it time after time, and it's the same as doing it once: the extra data doesn't interfere with the running of the app.

    What these idiots were doing, now that is another matter entirely.

    Personally, I'm surprised nobody's brought up the "but form buttons are so ugly" chestnut. I've a way to smack that one down too.

    Ahem: 
    <chestnut>
         "But form buttons are so ugly."
    </chestnut>

    (Just curious.)

    That's where CSS comes into play.

    loneprogrammer:
    kipthegreat:

    Come on, having a "view text only" button at the bottom of a page would look retarded.  But having a "?textonly=1" link violates the standard.  An onclick:submit() link, though, works nicely for anyone not trapped in 1998.

    And CSS works nicely for anybody not trapped in 2001.

    Try to understand this -- HTML tags do NOT control what things look like.  CSS controls what things look like.

    E.g. the "ul" tag does NOT mean "make a bulleted list."  It just makes a list.  The CSS attribute "list-style-type: disc" is what makes it have bullets.  If you have a "ul" tag with "list-style-type: upper-roman" then it gets Roman numerals instead of bullets, and if you say "list-style-type: none" then you have a list with no bullets at all!

    ul means "make an unordered list", using roman numerals on an unordered list is proof that you didn't grasp how HTML and CSS work.

    bullseye:

    Hubert Farnsworth:
    You know what? I've got noscript installed in Firefox.

    You are the web-developer's nemesis. [;)]

    He's not.

    makomk:
    You're not - see my sig. It is possible to get to page 2 or more with GET requests; just insert a 2/ (or whatever page number you want) in the path before the post/thread number - e.g. http://thedailywtf.com/forums/2/65974/ShowPost.aspx is the second page of this thread. (There's another method using query strings too, IIRC). Unfortunately, the pager doesn't use this for some reason - pester Alex if you want it fixed...

    Thanks, you rock.

  • (cs) in reply to Hubert Farnsworth
    Hubert Farnsworth:
    your mom:
    what about links such as this?


    You know what? I've got noscript installed in Firefox. Usually, when I encounter submit links that do nothing, I even take the trouble of looking at the source code. Doesn't work because of javascript? There's an easy solution.

    1. click on that little cross on the right top of the window
    2. go to the competitor's website
    That easy.

    You're not missed.

    Seriously, if I were designing a site so that only malcontents with Asperger's were happy, I'd keep your comment in mind. Since no one else cares...

  • (cs) in reply to NancyBoy
    That's where CSS comes into play.


    The sad part is that IE does not give you full control over buttons. It adds about 10px of padding left and right even with padding: 0;, and you can't remove it.

    This highly annoying behaviour has not been fixed in IE7 Beta 2.

    So when I ran into this issue, instead of enforcing my stylistic ways, I opted to adjust the entire design to allow for a wider button, because reinventing already robust functionality ("submit") using JavaScript (which can be turned off) is a Bad Practice. It falls under Square Wheel Reinvention.
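
    For the record, the kind of styling in question is roughly this; the class name is arbitrary, and as noted above IE still insists on some horizontal padding:

        input.linklike {
            border: none;
            background: transparent;
            padding: 0;              /* IE largely ignores this on buttons */
            color: #00c;
            text-decoration: underline;
            cursor: pointer;
        }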

    He's not.


    Is too!

    *hugs my DOM and fuzzy JS library*



    Ok.
    Is not.
  • Random User (unregistered) in reply to ParkinT

    http://www.fhwa.dot.gov/infrastructure/hawaii.htm

  • (cs) in reply to jamesCarr

    jamesCarr:
    paddy:
    gwenhwyfaer:
    bullseye:


    PLEASE don't do this! The number of times I've clicked on what should have been a button on some page http://some.url/page and found myself directed to http://some.url/page/# because onClick events were silently discarded... If it needs to submit a form, make it a submit button. If it needs to execute JavaScript locally, make it an ordinary button. If it's a link, it should take me somewhere AND NOTHING ELSE.



    Then someone did the onClick wrong? 

    I think the old "hyperlinked documents" model is failing to keep up with the fact that people are writing more and more complex systems, so by all means use links, forms, or whatever.  Last time I checked, when you show an HTML layer over form elements (for dynamic menus etc.), the form elements or any embedded element get painted over the top of the HTML, even though they are on a layer behind it. That's a pretty good reason to use links instead of fat widgets, I'd say.
    Also, GET requests are good for "modifier" requests at times: do you want a form button just to change a default language or some nondescript setting?  To update a hit counter?

    I think it's a problem when a programmer has a stylistic programming reason for limiting the freedom of the designers.
    Give them some guidelines, and hard rules where they really are needed, but seriously, if a designer can come up with a richer and more appealing interface that uses links to submit form posts, there is no reason in the world they should be forced to dumb down their design.


    No, you're just someone too dumb to understand html. Otherwise you wouldn't refer to them in dreamweaver speak you dumbshit.

    Naturally, "the old hyperlined document model" you refer to, of which I am assuming you mean those things called web standards, works just fine for those who actually now what they are doing, rather than dragging and dropping imaginary "widgets" and "layers" in frontpage.

    I am breaking a "personal rule" to never respond to anyone who uses profanity on this forum.
    However, referring to layers does NOT make one a "dumbshit".  That happens to be a W3C standard.  The effect to which gwenhwyfaer is referring is quite common.  Anyone who has spent more than (say) 3 hours writing HTML code understands how the browser(s) render Form Elements with a precedence that overrides any attempts to use properly applied (standards-based) CSS techniques.

    This comes from "the voice of experience".  One who has been writing complex HTML/JavaScript/CSS systems in notepad since HTML was at version 0.1

    Try it sometime.  It is good for the soul.

  • (cs) in reply to paddy
    paddy:

    I agree with caching where possible, and of course scalability is very important.  Ironically, it would be really nice if web applications were designed more like client-side applications, adding elements as children of other elements, where the higher-level design elements could be left intact while you flipped next/prev through data in whatever field area it is displayed in.  That would make things a lot easier on the server, and it would make it easier to partition out what data is cacheable, what is not, and what is actually already on the client.  That is, without resorting to frames, of course.


    <Scratches head>

    But we can do that, and have been able to for a long time. People haven't been making all that noise about XMLHttpRequest for no reason, you know. And even before that became popular, there were still plenty of ways of doing client-server communication, from hidden IFRAMEs to a JavaScript mechanism for fetching data that I've seen independently invented several times by others too: http://talideon.com/projects/javascript/JSRPC.js

    Where have you been for the past few years? :-)
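
    A bare-bones sketch of the sort of thing being referred to follows; the function and element names are invented, and real code would want proper error handling:

        function fetchFragment(url, callback) {
            // 2006-era branch: native XMLHttpRequest where available, ActiveX on older IE
            var req = window.XMLHttpRequest ? new XMLHttpRequest()
                                            : new ActiveXObject("Microsoft.XMLHTTP");
            req.onreadystatechange = function () {
                if (req.readyState == 4 && req.status == 200) {
                    callback(req.responseText);
                }
            };
            req.open("GET", url, true);
            req.send(null);
        }

        // e.g. refresh just the message list instead of rebuilding the whole page
        fetchFragment("/messages?page=2", function (html) {
            document.getElementById("messagePane").innerHTML = html;
        });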


    I think most web applications are edge cases.


    No, by definition they can't be.


    Whether you check your email, bid on auctions, process reports, etc., it comes up.


    That's why it's possible for the server to prevent caching of GET where the information sent is one-shot. However, even with progress reports, and checking your email, they still benefit from idempotency by using Cache-Control: private and other mechanisms.

    Now, only an idiot would use an idempotent request to submit a bid on something. Why? Caching. You could click a link to place a bid and get back a cached page from somebody else. That's not what you want happening, and that's one of the reasons HTTP POST exists. It's for submitting non-idempotent, mutating requests. Its responses cannot be cached, so you always get back a fresh page.


    Many news and shopping sites can and do utilize effective caching, of course, since they do have high traffic to rather static pages.  I absolutely admit it is important to use it where possible.


    You haven't read that article I pointed to on idempotency, have you?


    If the site is a web application that requires the user to register and/or log in, all that traffic will still require that each user gets the page generated with a "hello [x]" at the top, and caching will not help.


    Now, let me think... when you log in, what kind of request is that... oh, an HTTP POST! That, and must I point out again that there are differing levels of caching? You can specify public caching, where the response will be cached on a public caching proxy; you can specify private caching, where it will be held in your browser cache alone; and you can specify no-cache, where it's absolutely positively impossible for the response to be cached anywhere. You can also specify expiration dates on responses, which might range from a few milliseconds to a month and upwards, depending on how fresh the response needs to be.

    How are you not getting this?


    And of  course I agree.  The caching issue came up in context of complex user interfaces, which generally are in complex web applications such as ones that help you check your email, bid on products, or otherwise provide lots of unique data.  In those cases, the user often gets different data off the same URL even a second later.  They are also generally personalized to the user that is logged in, and cannot be reused for others logged in on other accounts.


    You do know the difference between HTTP GET and HTTP POST, right?


    As far as I am concerned, if a designer finds it useful, and does it gracefully, the experience is transparent to the user who only knows they like how the site works.  We can of course disagree.


    The problem here is that it doesn't do it gracefully; that's why it's a nasty hack. If you have two methods of allowing the user to trigger some non-idempotent action, one of which (styling a button in a form performing an HTTP POST to look the way the designer intended) isn't a hack, and another (using GET for a non-idempotent request or using a JS link to trigger a form submit) which is a hack, and they both have the same outward appearance, then there's no disagreement here: the former is the correct way to do it and the latter is for somebody who's too lazy to do their job right.

    K.
  • p3n4r (unregistered)

    you really didn't need to say 'whoops' so many times. It made your post worthless.

  • p3n4r (unregistered)

    You said 'whoops' way to many time. it was damn annoying

  • (cs) in reply to paddy
    paddy:
    Also, GET requests are good for "modifier" requests at times: do you want a form button just to change a default language or some nondescript setting?


    You'll think otherwise when your proxy changes your input language to Mandarin.

    BTW: HTTP POST can be submitted through a link.

  • (cs) in reply to Keith Gaughan

    Not sure if it was mentioned previously, but this story and TheDailyWTF were "digged" recently:

    http://www.digg.com/technology/Googlebot_destroys_incompetent_company_s_website


  • (cs) in reply to ParkinT
    ParkinT:

    I am breaking a "personal rule" to never respond to anyone who uses profanity on this forum.
    However, referring to layers does NOT make one a "dumbshit".  That happens to be a W3C standard.

    Wrong, there are no "layers" in the W3C specification.

    ParkinT:

    The effect to which gwenhwyfaer is referring is quite common.  Anyone who has spent more than (say) 3 hours writing HTML code understands how the browser(s) render Form Elements with a precedence that overrides any attempts to use properly applied (standards-based) CSS techniques.

    Wrong again, only Internet Explorer overrides any attempt to set z-index to put other HTML elements on top of form elements, and even then it only does it for a select number of form elements (mainly selects, maybe also textareas). No other browser follows that kind of stupid behavior.

    And it's supposedly been fixed in IE7, too.

  • (cs) in reply to Benjamin McKraken
    Anonymous:
    (Re: http://talideon.com/wiki/?doc=WikiEtiquette)

    It's a form button, huh? Very clever, Lord Smartington Cleverboots of Brainingworth Hall.

    That must be why it looks different from the links (it has a dark rectangle background for me) and acts differently from the links (caption moves when clicked, dotted border) and generally creates a jarring 'is this a button? why is it not like the other links?' feeling.

    Smooth. Real smooth.

    Seriously, if you're going to do something, do it right and test in more than one browser. If you must test in one browser, make it a standards-compliant one.



    Don't be a smart ass. If you're going to complain, be at least a bit constructive. Tell me what browser you're using, and what platform it's on, give me a screenshot, and I'll do my best to make it work right. You also seem to be missing the bigger point.

    Now, I have three browsers running on my machine: Firefox 1.5.0.1--my primary browser--Opera 8.53--which I find useful for debugging JavaScript--and IE 6.0. Last time I looked the former two were considered standards-compliant, no? And guess what: the page looks just fine in all three. So, smooth, real smooth. You've just made an allegation against me that was unfounded.

    Now, the only browser I know of that acts exactly the way you described is IE. All three put a dotted line around both links and buttons when styled not to have a border. Check. IE is the only one that makes the text move when the button's clicked and there's nothing I can do about it.

    The whole point I was attempting to make was that there's no need to use a HTTP GET to modify application state. That page is made to use a HTTP POST to accomplish the job properly.

    K.

  • whatever (unregistered) in reply to paddy
    paddy:

    I think it's a problem when a programmer has a stylistic programming reason for limiting the freedom of the designers.


    I actually think you've got this the wrong way round. I don't think that a programmer should jeopardise security (or standards for that matter) for the sake of the designer's 'freedom'. The case in point is not merely a question of programming style, but a programming error resulting in not only a breach of security, but the loss of large amounts of data. Are you willing to accept these risks for the designer's 'freedom'?

    Should a car be built with aluminium panels in place of the windscreen and windows because the designer thinks it looks better? All elements of work need to be undertaken within a certain level of constraints. Automotive engineers work within constraints, car designers work within constraints, programmers work within constraints and web designers should also be confined by constraints.

    Any designer worth paying should be able to design an attractive, engaging and intuitive design for a web application within the constraints laid out by web standards and the requirements of both the customer and the programming team they are working with.
  • (cs) in reply to masklinn
    masklinn:
    ParkinT:

    The effect to which gwenhwyfaer is referring is quite common.  Anyone who has spent more than (say) 3 hours writing HTML code understands how the browser(s) render Form Elements with a precedence that overrides any attempts to use properly applied (standards-based) CSS techniques.

    Wrong again, only Internet Explorer overrides any attempt to set z-index to put other HTML elements on top of form elements, and even then it only does it for a select number of form elements (mainly selects, maybe also textareas). No other browser follows that kind of stupid behavior.

    And it's supposedly been fixed in IE7, too.


    And even then, it's only the SELECT element that's affected.
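    For anyone who does need a dropdown layer to overlap a SELECT in IE6, the usual workaround is an iframe "shim" stacked just beneath the menu layer. A rough sketch (element IDs invented):

        <div id="menu" style="position: absolute; z-index: 100; display: none;">...</div>
        <!-- Empty iframe sized to match the menu and stacked just below it;
             IE paints iframes above SELECTs, so the menu no longer shows through. -->
        <iframe id="menuShim" src="javascript:false;" frameborder="0" scrolling="no"
                style="position: absolute; z-index: 99; display: none;"></iframe>
        <script type="text/javascript">
        function showMenuWithShim(menu, shim) {
            menu.style.display = "block";            // show the menu first so it has dimensions
            shim.style.top     = menu.offsetTop + "px";
            shim.style.left    = menu.offsetLeft + "px";
            shim.style.width   = menu.offsetWidth + "px";
            shim.style.height  = menu.offsetHeight + "px";
            shim.style.display = "block";
        }
        </script>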

    K.
  • (cs) in reply to p3n4r
    Anonymous:
    You said 'whoops' way to many time. it was damn annoying

    Keep on trying, maybe you'll get it right on the third attempt even :P
  • HR (unregistered) in reply to Keith Gaughan
    Keith Gaughan:

    The whole point I was attempting to make was that there's no need to use an HTTP GET to modify application state. That page is made to use an HTTP POST to accomplish the job properly.


    Of course it accomplishes the job of posting instead of getting, but that isn't the question, is it? If it were, you wouldn't need to style it like a link.

    The question we should ask ourselves is: why use POST? I can find three reasons in this thread not to use GET: "make caching work correctly", but since we can set Cache-Control and turn caching off anyway, it's no longer an issue; "make programs such as Googlebot not visit some pages", which can be done through proper authentication; and "because the RFC says so", which is the only valid reason, but as it's only a recommendation we can ignore it when we absolutely have to.

    The user experience should always come first, and sometimes buttons or JavaScript just don't fit.

    As a side note: how do buttons and :hover work in IE 5.5/6?
  • (cs) in reply to Keith Gaughan

    One question: how does "not using cookies" allow the spider to bypass the cookie check? Wouldn't this cause the cookie check to always return false, and thus keep the spider logged out?

    Or is this security implemented in Javascript? Please no...

    ---

    As a side note, it is advisable to turn all forms that make significant changes or execute major actions into POST forms, thus putting them out of reach of any spider, bookmark, or accidentally clicked link. If this causes "delete page" to require two clicks instead of one -- once on the link, once on the submit button of the POST form -- that is a small price to pay.

    This is because GET requests are considered to be safe, i.e. intended only for information retrieval. Unsafe methods (such as POST, PUT and DELETE) should be displayed to the user in a special way (e.g. as buttons rather than links), making the user aware of possible side effects of their actions (e.g. a financial transaction).

    In other words, if a rogue GET request can wipe your site, it's your own fault.
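    To illustrate the two-click approach (URLs and field names invented):

        <!-- Step 1: an ordinary, safe link that only shows the confirmation page. -->
        <a href="/pages/confirm-delete?id=42">Delete this page</a>

        <!-- Step 2: the confirmation page carries the destructive action in a POST form,
             so no spider, prefetcher or bookmark can trigger it by accident. -->
        <form method="post" action="/pages/delete">
            <input type="hidden" name="id" value="42" />
            <input type="submit" value="Yes, delete this page" />
        </form>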
  • Spudley (unregistered) in reply to Keith Gaughan
    Keith Gaughan:

    Far be it from me to point out how utterly retarded it is to (ab)use JS like that to make a link submit a form, whether using the javascript: pseudo-protocol or an onclick event. Seriously, people, use a proper friggin' form button! For the sake of a demo, here's a page on my wiki: http://talideon.com/wiki/?doc=WikiEtiquette. See the 'use edit version' link at the bottom? That's no link, it's a form button, and it's a form button because clicking it changes the wiki's view state. See how it looks like the links to each side? CSS.

    Okay. I can agree with you up to a point. However:

    1. Your button doesn't get styled as intended in the Konqueror browser (which is my default).

    2. The button text can't be copied and pasted, and it breaks the flow of your text, so if you need to include something like this within your page's body text, it won't look right and will probably choke search engines.

    3. If you're writing an Ajax app, you're going to be using JavaScript anyway, so I don't see anything wrong with an onclick event in that case. (Not all onclicks result in a call to the server; e.g. treeviews and menus just make other elements (dis)appear -- see the sketch below.)
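    Something like this, for instance, never touches the server at all (IDs invented):

        <a href="#reports" onclick="toggle('reports'); return false;">+ Reports</a>
        <ul id="reports" style="display: none;">
            <li><a href="/reports/2005">2005</a></li>
            <li><a href="/reports/2006">2006</a></li>
        </ul>
        <script type="text/javascript">
        // Purely client-side: show or hide the branch, no request is made.
        function toggle(id) {
            var el = document.getElementById(id);
            el.style.display = (el.style.display == "none") ? "" : "none";
        }
        </script>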

  • (cs) in reply to Keith Gaughan
    Anonymous:

    Give them some guidelines, and hard rules where they really are needed, but seriously if a designer can come up with a richer and more appealing interface that use links to submit form posts, there is no reason in the world they should be forced to dumb-down their design.

    Give me one instance where you think you're right, and I'll show you you're wrong.

    K.



    That's easy.  I need a picture and text together in one solid link. The text needs to work in different languages, so baking it into a separate image for every language is not acceptable. For example, a big red X next to the word "delete".

    How do I solve it? Using Ruby on Rails, it's pretty easy. I use link_to_remote to do a JavaScript POST action. If the user has Ajax capability, it updates the page dynamically; otherwise it redirects to a new page load.

    If the user doesn't have JavaScript, the link triggers the same URL via GET, but instead shows a confirmation page with the data in a form and a regular submit button (which submits back to the same action as before).
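    Stripped of the Rails helper, the pattern boils down to roughly this (a hand-written sketch with made-up URLs, not actual link_to_remote output):

        <!-- The visible "link": icon plus translatable text. -->
        <a href="/items/confirm_destroy/42"
           onclick="new Ajax.Request('/items/destroy/42', { method: 'post' }); return false;">
            <img src="/images/delete.png" alt="" /> Delete
        </a>

    With JavaScript on, the onclick fires a POST (here via Prototype's Ajax.Request, which Rails ships with) and cancels the link; with it off, the href falls through to a plain GET that only shows the confirmation form.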
     
  • (cs)

    After all was said and done, Josh was able to restore a fairly old version of the site from backups. He brought up the root cause -- that security could be beaten by disabling cookies and JavaScript -- but management didn't quite see what was wrong with that. Instead, they told the client to NEVER copy-paste content from other pages.


    When I read things like this, I seriously consider abandoning my job and taking up swine herding instead. It's hard work, but you get much more intelligent clients.
  • (cs)

    I'm new here, came via Digg or some other RSS feed.

    Anyhow, I have a few questions:

    1) Would this all have been avoided if they had used HTTP Authentication? I know it's not entirely secure, but correct me if I'm wrong: robots can't view anything in a password-protected directory since they're not authenticated?

    2) Does anyone have any suggestions on validating form input? I'm currently using an onClick="checkform();" type of JavaScript check, and I realize this isn't ideal. Should I send all the information via POST to a script that checks it and rewrites the header data depending on whether it's OK or not?

    Thanks.

  • (cs) in reply to HR
    Anonymous:
    Keith Gaughan:

    The whole point I was attempting to make was that there's no need to use an HTTP GET to modify application state. That page is made to use an HTTP POST to accomplish the job properly.


    Of course it accomplishes the job of posting instead of getting, but that isn't the question, is it? If it were, you wouldn't need to style it like a link.


    No, it is the question. If it makes a non-idempotent state change on the server, you ought to use an HTTP POST. Really: http://www.dehora.net/journal/2006/03/oh_well.html, and follow the links.

    If it were just getting data and the request were idempotent, a link would be just fine. That's what it's for. This is Distributed Systems 101.


    The question we should ask ourselves is: why use POST? I can find three reasons in this thread not to use GET: "make caching work correctly", but since we can set Cache-Control and turn caching off anyway, it's no longer an issue;


    Pragma's been around since HTTP/1.0, and Cache-Control was introduced in HTTP/1.1, so there hasn't been a problem with this since '96. That's not the issue though.
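    For completeness, the response headers in question are just something along these lines (one possible combination):

        HTTP/1.1 200 OK
        Content-Type: text/html
        Cache-Control: no-cache, no-store
        Pragma: no-cache

    Cache-Control covers HTTP/1.1 caches; Pragma is there for any ancient HTTP/1.0 ones.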


    "make computer programs suchs as googlebot not visit some pages" which can be done through proper authentication


    That's a valid reason to use proper authorisation and authentication, not a reason not to use HTTP GET.


    and "because the rfc says so" which is the only valid reson, but as it is only an recommendation we can ignore it when we absolutely have to.


    Um, no. It's not a recommendation, it's an actual standard. HTML 4.0 is a recommendation because it comes from the W3C, but you'll find that HTTP (and HTML 2.0, for that matter) was ratified by an actual standards body, the IETF.

    That, and if you ignore the spec, you end up incompatible with everything else out there. A protocol is a shared agreement between you and your peers implementing that protocol. If your app breaks because you decide to ignore the protocol, it's your fault, not theirs.

    The user experience should always come first, and sometimes buttons or JavaScript just don't fit.

    Heck, TCP and IP don't work quite the way I'd like sometimes. Maybe I can ignore their specs and do what I like. No, wait, then I wouldn't be able to communicate effectively with the rest of the world.


    As a side note: how do buttons and :hover work in IE 5.5/6?


    It's broken. If you really, really, really need to emulate :hover on buttons in IE, JS is fine for that. But do it cleanly with something like Dean Edwards's IE7 patch. That's a separate matter, though: its lack doesn't break anything, and it's completely tangential to the thread.
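    If you want it anyway without pulling in the patch, a crude version is just a class toggle (class name invented) that the stylesheet targets as button.hover alongside button:hover:

        <script type="text/javascript">
        window.onload = function () {
            var buttons = document.getElementsByTagName("button");
            for (var i = 0; i < buttons.length; i++) {
                buttons[i].onmouseover = function () { this.className += " hover"; };
                buttons[i].onmouseout  = function () {
                    this.className = this.className.replace(/\s*\bhover\b/, "");
                };
            }
        };
        </script>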

    K.
  • coglethorpe (unregistered) in reply to Whiskey Tango Foxtrot? Over.

    "He brought up the root cause -- that security could be beaten by disabiling cookies and javascript -- but management didn't quite see what was wrong with that. Instead, they told the client to NEVER copy paste content from other pages."

    Granted, the authentication bug was a big deal, but the real problem was management's non-solution to the problem.
