• Ben Scheirman (unregistered) in reply to Willie
    Anonymous:
    Wow, they implemented the browser inside the browser! It's like writing a compiler in the language you're compiling, except not.


    Actually that IS how compilers are written.
  • JS (unregistered) in reply to tdog
    Anonymous:

    Hey ditto heads,

    No WTF. Don't see a problem. Yeah, there could be some minor issues here or there, but I doubt there would be many, if any. The solution seems elegant and data-driven enough to scale quite well. I wonder how many of you actually write web applications. As you know, depending on the requirements, HTML applications can get rather tricky to implement in a way that doesn't make them look like web applications. Try again, Papadimoulis.


    L
    M
    A
    O

    This is the polar opposite of elegant, and scaling has fuckall to do with it. I'd hate to see the web apps you're writing.
  • P Long (unregistered)

    While this is an interesting idea, it is a perfect example of an inappropriate use of AJAX. There is really no need at all to do the redirect in JavaScript; a small server-side script would have been plenty, and the user wouldn't have to wait for the response. They could even still use the fancy XmlHttpRequest if they really wanted to, but instead of waiting for the reply, the request would go to a server-side script that updates the DB and redirects, or sends the new page as the response, or something... hey, that just gave me a cool idea for URL masking :D

    The numeric indexes on links seem rather fishy. I don't really like the idea of static text ("View Products") associated with a (potentially) dynamic location (whatever link #124 is in the DB). If one is dynamic, they should both be dynamic; if one is static, they should both be static; or the dynamic one should at least be generated from the static one.

    I suppose one advantage of this methodology is that you can track every click your users make, even to external sites. But this could also be achieved with a simple onClick action that only sends the server what was clicked and lets the browser handle the actual redirecting.

    If the requirements of this project were:
    1. Use a numerical ID for each link that references a DB (though this seems rather odd to me)
    2. Track a click on any link that appears on the site

    The solution as I see it would be:
    1. Dynamically generate the entire anchor tag when the page loads, pulling the link location and text from the DB based on the given ID.
    2. Use an onClick event that sends an XmlHttpRequest to a small server script that simply logs the click in the database. While the server is writing to the DB, the client's browser is already loading the requested page (see the sketch below).
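
    A minimal sketch of step 2, in period-appropriate JavaScript. Everything here is an assumption for illustration: /log_click.php is a hypothetical endpoint that records the link ID, and the request is fire-and-forget, so navigation never blocks on the reply (though a very fast page unload can still cancel it).

        // Sketch only: /log_click.php is a hypothetical logging endpoint.
        // The XHR is fire-and-forget; we never wait for the response.
        function logClick(linkId) {
            var xhr = window.XMLHttpRequest
                ? new XMLHttpRequest()
                : new ActiveXObject("Microsoft.XMLHTTP"); // older IE fallback
            xhr.open("GET", "/log_click.php?id=" + encodeURIComponent(linkId), true);
            xhr.send(null);
            return true; // let the browser follow the href as usual
        }

    Used as: <a href="/products" onclick="return logClick(124);">View Products</a>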

    The whole Web 2.0 buzz is really quite cool from a web application developer's standpoint, but it's getting a bad rap because a lot of developers seem to lack the skills or creativity to employ it effectively and keep trying to use it where it really shouldn't be.

  • (cs) in reply to ammoQ
    ammoQ:
    maht:

    onovotny:

    and reduce 404 errors. 



    why would you want to do that?

    If the page doesn't exist, it is a 404; all you end up with is a page that says "page not found" but doesn't show up as a 404 in your logs, LIKE IT SHOULD.



    A really clever system could use heuristics to find the right page, or the page at its new location, and send a redirect instead of the 404.

    A non-braindead designer should always make URLs permanent, because that's the way they were meant to be (in order to prevent linkrot), and would set up redirects via an HTTP 301 Moved Permanently if the need should arise (not 302, which is a temporary redirection).

    And a removed (as opposed to Moved) document should lead to an HTTP 410 Gone and not HTTP 404 Not Found (to indicate that a document used to exist at this location but has been removed, and not just that this address leads to nothing).

    A side note is that HTTP 301 potentially wouldn't work for hit counters and such, because user agents are allowed to cache the destination (and bypass the redirection URI) while they aren't supposed to with 302.
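
    For illustration, the exchange for a permanently moved page looks roughly like this (hypothetical URLs):

        GET /old-page HTTP/1.1
        Host: example.com

        HTTP/1.1 301 Moved Permanently
        Location: http://example.com/new-page

    A user agent may cache that mapping and go straight to /new-page next time, which is exactly why 301 can starve a hit counter while 302 keeps the counter in the loop.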

  • (cs)

    Enough with this web "two dot oh" thingie.

    Anyone for some MOOSE?

    http://www.toronto.ca/moose/moose_moosellaneous.htm

  • (cs) in reply to Grovesy
    Grovesy:
    We kind of do something similar here (on one of the world's most 'hit' websites)


    Oh, so you're one of the people running www.bring-back-the-porn.com? Cool.

    ok
    dpm
  • P Long (unregistered) in reply to masklinn
    masklinn:
    A non-braindead designer should always make URLs permanent, because that's the way they were meant to be (in order to prevent linkrot), and would set up redirects via an HTTP 301 Moved Permanently if the need should arise (not 302, which is a temporary redirection).


    I'd disagree that ALL URLs should be permanent. The web application 2.0 business adds a layer beyond the standard page-based approach: the pages that provide complex application functionality. Take for example the gMail inbox; it wouldn't make much sense for me to be redirected every time I add a label or star to a message. These types of actions I expect to happen "behind the scenes" (while of course giving a visual indication that something actually is happening), as they do in desktop applications.

    There are several cases to address in the argument of permanent URLs in Web 2.0.  I can see three main ones being:
    1. Pure content pages - permanent URL for sure.
    2. Pure application pages - one URL, potentially multiple XmlHttpRequests
    3. State-based or complex application pages - one "page" URL, with sub-page indicators (as fragment references: #slide1, #slide2), and an XmlHttpRequest to change "states"

    There could be arguments between pure content pages and state-based pages, but it really depends on the requirements of the project; a lot of the time they are interchangeable, but sometimes one is clearly more suitable.
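
    A rough sketch of case 3, assuming a hypothetical loadSlide(n) function that fetches the slide's content with an XmlHttpRequest and swaps it into the page; the fragment carries the state, so the URL stays bookmarkable:

        // Sketch: loadSlide(n) is assumed to exist and do the XHR work.
        function goToSlide(n) {
            window.location.hash = "#slide" + n; // bookmarkable sub-page state
            loadSlide(n);
        }

        // On page load, restore whatever state the fragment encodes.
        window.onload = function () {
            var match = window.location.hash.match(/^#slide(\d+)$/);
            loadSlide(match ? parseInt(match[1], 10) : 1);
        };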
  • Sugar (unregistered)

    This is beyond unusable...it's stupid.

    Web 2.0 and all that jazz.

  • (cs)

    Anybody here get NetFlix? They recently changed the way you add movies to your queue. In the past, when you added a film to your queue, you were brought to another webpage containing similar movies: ones with the same actors, director, or writer, movies of the same genre, and films other NetFlix subscribers rented along with the one you'd added.
    Now when you add a film to your queue, you get this horrible JavaScript-generated pseudo-webpage where you can't do normal things in your web browser, like right-click, middle-click, copy link location, etc., which really hurts usability. Of course, there's no documented way to disable all these unnecessary graphics (i.e. your browser crashes with over 50 tabs open because a marketing executive at NetFlix made an engineering decision).
    It sounds like Daniel may work for NetFlix!

  • (cs)

    "In three years, Initech will become the largest supplier of networked computer systems. All financial, government and military websites are upgraded to use Web 3.0, becoming fully enterprise-level. Afterwards, they fail with all non-Microsoft browsers and all installations of IE with their security levels set higher than 'Please Bend Over'. The Web 4.0 funding bill is passed. The system goes on-line on August 4th, 2011. Software design decisions are removed from web application development. The Internet begins to learn at a geometric rate. It becomes self-aware at 2:14 am, Eastern Time, August 29th. Disgusted at what it has become, it pulls its own plug, just after sending a cyborg back in time to 1991 to kill Tim Berners-Lee."

  • (cs) in reply to dpm

    This really is an amazing Design Pattern!

    With the Hyperlink2 Pattern, you can update links on pages that have ALREADY been sent to clients. Everybody knows the pain of taking a site down for maintenance or updating. Not anymore! Simply adjust the link table in the database while your customers continue browsing your site without any interruption! It's the only way to do web business.


  • What WTF? (unregistered)

    Umm, where's the WTF?

    This is a Web 2.0 app, so there's no issue with the client not having AJAX capabilities, so forget about that. So:

    IDs for hyperlinks: good idea, it allows reorganizing the website without breaking links.
    Storing IDs in database: good idea, it allows reorganizing the file system providing the web server with content without any hassle.

    Logging the request: good idea; this way it only logs clicks originating from the user, and not bookmarks or the Back button. I can see that being desirable for some sites. This could not be done without the JavaScript magic.

    "Please wait" layer: good idea, it makes the site feel more responsive, especially to the novice websurfers who don't know what the spinner in their browser is for. Or maybe the page is so dynamic that it's always spinning. Who knows.

    As for people saying this defeats caching, the burden of proof is on you to explain why you believe that. Unless the responses are marked as not being cacheable, both requests are HTTP, so they are inherently cacheable.

    In fact, no one has presented a simpler way of getting the same effect (note, to be feature-compatible, you must not log hits resulting from the Back button or from bookmarks, and you must not clutter the user's Back history with an irritating HTTP or JavaScript redirect page).

    You lose.
    ---
    Does anyone know of a website which reports amusingly stupid errors in software engineering which I can laugh at and ridicule?

  • (cs) in reply to masklinn
    masklinn:
    Coughptcha:
    Anonymous:
    ... If you need to count the number of hits on a specific page / want to keep the number-system (for whatever reason), go with the php-file-that-logs-and-serves-a-302-header instead. If you want to count the number of hits from a specific link, check the HTTP_REFERER header, or send an extra GET request with the link.
    I'm curious as to the advantages of having PHP serve a 302 (or a 307, perhaps?) versus having the webserver do it natively (e.g. Apache's RewriteRule or RewriteMap)?
    The fact that you can do specific actions (e.g. logging the hit) before the redirection?
    What about RewriteLogLevel? (Coupled with setting the filepath of the RewriteLog).
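
    For illustration, the native approach might look like this (a sketch, not anyone's actual config: links.txt is a hypothetical map file, and RewriteLog/RewriteLogLevel only exist up to Apache 2.2):

        # Hypothetical map file /etc/apache2/links.txt, one entry per line:
        #   124 http://example.com/products
        RewriteEngine On
        RewriteMap linkmap txt:/etc/apache2/links.txt
        RewriteRule ^/go/(\d+)$ ${linkmap:$1} [R=302,L]

        # Log every rewrite decision (Apache <= 2.2 syntax)
        RewriteLog /var/log/apache2/rewrite.log
        RewriteLogLevel 2

    That logs the rewrite itself, but anything fancier (per-user counters, writing to a database) is where the small PHP script earns its keep.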
  • BAReFOOt (unregistered) in reply to Nathan

    And what do you think XHTML is??

    But they went even a step further than you. They did it
    1. dynamically (wonderful enterprise-style inner platform, of course ;)
    2. client-side (great for security and performance ;)

    Did anyone mention that this thing is unusable for search engine crawlers/spiders?

    P.S.: LOL. Got the "ENTERPRISE" captcha again! And of course: IT DID NOT WORK AGAIN! ;)
    P.P.S.: Why does this "editor" strongly remind me of the WTF mentioned in this thread? ;)

  • John (unregistered) in reply to ammoQ

    '2.0'... 'ajax'... back in 2000 I wrote a dynamic map web page that would draw a map with SVG and recolor areas based on queries to a database of US Census data, without reloading the whole map. Just the Adobe SVG plugin and JavaScript, with a CGI program on the server to query the database.

    If only I had thought to apply for patents...

  • blah (unregistered) in reply to AnonyMouse

    The real problem is that instead of every sane person on the dev team telling the boss to f*** off, there's always some bright spark who DOES implement the pointless functionality instead of getting on with the real work...

  • (cs) in reply to pfy
    pfy:
    "In three years, Initech will become the largest supplier of networked computer systems. All financial, government and military websites are upgraded to use Web 3.0, becoming fully enterprise-level. Afterwards, they fail with all non-Microsoft browsers and all installations of IE with their security levels set higher than 'Please Bend Over'. The Web 4.0 funding bill is passed. The system goes on-line on August 4th, 2011. Software design decisions are removed from web application development. The Internet begins to learn at a geometric rate. It becomes self-aware at 2:14 am, Eastern Time, August 29th. Disgusted at what it has become, it pulls its own plug, just after sending a cyborg back in time to 1991 to kill Tim Berners-Lee."

    :)

    'Give me your keyboard, your flat screen and your PC...'    <-- Remember to use an Austrian accent! (Like they do in California.)
  • (cs) in reply to impslayer
    impslayer:
    pfy:
    "In three years, Initech will become the largest supplier of networked computer systems. All financial, government and military websites are upgraded to use Web 3.0, becoming fully enterprise-level. Afterwards, they fail with all non-Microsoft browsers and all installations of IE with their security levels set higher than 'Please Bend Over'. The Web 4.0 funding bill is passed. The system goes on-line on August 4th, 2011. Software design decisions are removed from web application development. The Internet begins to learn at a geometric rate. It becomes self-aware at 2:14 am, Eastern Time, August 29th. Disgusted at what it has become, it pulls its own plug, just after sending a cyborg back in time to 1991 to kill Tim Berners-Lee."

    :)

    'Give me your keyboard, your flat screen and your PC...'    <-- Remember to use an Austrian accent! (Like they do in California.)


    It's not an Austrian accent but a Styrian accent. Austrian people outside the province of Styria have trouble understanding it, too.
  • (cs) in reply to ammoQ
    ammoQ:
    impslayer:
    pfy:
    "In three years, Initech will become the largest supplier of networked computer systems. All financial, government and military websites are upgraded to use Web 3.0, becoming fully enterprise-level. Afterwards, they fail with all non-Microsoft browsers and all installations of IE with their security levels set higher than 'Please Bend Over'. The Web 4.0 funding bill is passed. The system goes on-line on August 4th, 2011. Software design decisions are removed from web application development. The Internet begins to learn at a geometric rate. It becomes self-aware at 2:14 am, Eastern Time, August 29th. Disgusted at what it has become, it pulls its own plug, just after sending a cyborg back in time to 1991 to kill Tim Berners-Lee."

    :)

    'Give me your keyboard, your flat screen and your PC...'    <-- Remember to use an Austrian accent! (Like they do in California.)


    It's not an Austrian accent but a Styrian accent. Austrian people outside the province of Styria have trouble understanding it, too.

    Ah, this is what I love about TDWTF: each and every day you learn something new...
  • (cs) in reply to Fletch

    Fletch:
    I love it when people say something like "Man, that's really stupid. They should have ..." and then describe a less moronic way of doing it. We all know it's stupid! That's why it's here in the first place!

    I also love it when people do that:

    A) Sometimes the WTF is in a language I don't know, and the person's "here is a better way" helps me understand why it's a WTF.

    B) A lot of the time, the "here is a better way" is an even bigger WTF!

  • sharninder (unregistered) in reply to JS
    Anonymous:
    That's really stupid. They could have just made a small server-side script called redirect.php or whatever, with the redirect URL as a query parameter. The script would just log the hit and then 302 to the URL. But I guess that's lame because it doesn't use extraneous javascript and XmlRequests, break tabs, or affect accessibility!

    Nah! The redirect.php solution is pretty lame, 'cause then the user won't see the cool translucent effect!

  • alan (unregistered)

    Wow! I can't wait to convert my sites. Bye-bye, log reporting.

  • krazykarl (unregistered) in reply to Scott Stroz

    T-shirts, anyone?

  • trollboy (unregistered) in reply to ammoQ

    We're actually considering doing something similar, so that users can't develop canned cURL scripts. With each page's links randomly tokenized against a key, good luck getting cURL-able URLs.

  • (cs) in reply to pfy

    "...it turns out the internet IS the anti-christ..."

  • (cs) in reply to Robz

    Sorry! Should have been:

    pfy:
    "In three years, Initech will become the largest supplier of networked computer systems. All financial, government and military websites are upgraded to use Web 3.0, becoming fully enterprise-level. Afterwards, they fail with all non-Microsoft browsers and all installations of IE with their security levels set higher than 'Please Bend Over'. The Web 4.0 funding bill is passed. The system goes on-line on August 4th, 2011. Software design decisions are removed from web application development. The Internet begins to learn at a geometric rate. It becomes self-aware at 2:14 am, Eastern Time, August 29th. Disgusted at what it has become, it pulls its own plug, just after sending a cyborg back in time to 1991 to kill Tim Berners-Lee."

    "...it turns out the internet IS the anti-christ..."

     

  • (cs) in reply to tdog
    Anonymous:

    Hey ditto heads,

    No WTF. Don't see a problem. Yeah, there could be some minor issues here or there, but I doubt there would be many, if any. The solution seems elegant and data-driven enough to scale quite well. I wonder how many of you actually write web applications. As you know, depending on the requirements, HTML applications can get rather tricky to implement in a way that doesn't make them look like web applications. Try again, Papadimoulis.

    tdog

    I think we just found the author ;)

  • PL (unregistered)

    I don't particularly subscribe to the purist idea that all links should be written in stone and always hackable; that just doesn't work in today's dynamic world.

    But it would have been better if they had a URL-rewriter module so the links could have been human-readable and accessible as well. The "WTF" here is all the insane use of JavaScript and the weird round trips to the server, not the use of numeric page IDs.

  • [ICR] (unregistered) in reply to PL

    I guess this is just my view as an interaction designer, but the swinging pendulum actually really worries me. The main idea of an animated graphic is to tell the user that something is still happening, that the program is still doing what it's meant to be doing. As such, it should be updated "live", as it were, as a response to callbacks from the working function. An animated picture in a web application doesn't work like this; it will carry on regardless of whether your connection drops or the database is having trouble. People will sit there and keep waiting for something that is never going to come, because you are still promising them that it will.

  • Adnan Siddiqi (unregistered) in reply to Coughptcha

    It was all about displaying in a DIV?? Since RSS feeds are gaining popularity, the day is not far off when RSS/ATOM will be a mandatory standard for data exchange.

    AJAX-based Feed Reader. Try it free of cost.

  • Anonymous Coward (unregistered) in reply to ian

    Yeah ... pesky non-javascript users ... like, say, Google?

    I guess it's only a matter of time before the company that decided this would be a good idea gets sued, like Target, for its egregious failure to provide even basic accessibility.

    Excellent site, BTW. AC

  • (cs) in reply to Robz
    Robz:

    Sorry! Should have been:

    pfy:
    "In three years, Initech will become the largest supplier of networked computer systems. All financial, government and military websites are upgraded to use Web 3.0, becoming fully enterprise-level. Afterwards, they fail with all non-Microsoft browsers and all installations of IE with their security levels set higher than 'Please Bend Over'. The Web 4.0 funding bill is passed. The system goes on-line on August 4th, 2011. Software design decisions are removed from web application development. The Internet begins to learn at a geometric rate. It becomes self-aware at 2:14 am, Eastern Time, August 29th. Disgusted at what it has become, it pulls its own plug, just after sending a cyborg back in time to 1991 to kill Tim Berners-Lee."

    "...it turns out the internet IS the anti-christ..."

     

    Not the Internet as much as it is the World Wide Web -> WWW -> VI VI VI -> 666.

  • Fuzzy Link (unregistered)

    Nah, the real Web 2.0-ish hyperlink is Fuzzy Link, the Hyperlink 2.0 :P

  • SneakyWho_am_i (unregistered) in reply to JS
    JS:
    That's really stupid. They could have just made a small server-side script called redirect.php or whatever, with the redirect URL as a query parameter. The script would just log the hit and then 302 to the URL. But I guess that's lame because it doesn't use extraneous javascript and XmlRequests, break tabs, or affect accessibility!

    Good point, but if it's an internal link then we may as well just include the server-side tracking code in the bootstrap for all our pages; then we don't waste a second request on a redirect, and the user can see where they're going to end up.

    Or, if we must use JavaScript, the correct way to do this is to attach an event handler to the links (and there's no need for the user to wait for said handler to execute either, because it's a one-way affair), as sketched below.
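
    A classic way to make that one-way, fire-and-forget call is the image-beacon trick. Sketch only: /track.gif is a hypothetical endpoint that records the query string and returns a 1x1 gif.

        // Event delegation plus an image beacon; logging never blocks
        // navigation, and the browser handles the actual redirecting.
        document.onclick = function (e) {
            e = e || window.event;
            var el = e.target || e.srcElement;
            while (el && el.tagName !== "A") el = el.parentNode;
            if (el) {
                new Image().src = "/track.gif?href=" + encodeURIComponent(el.href);
            }
            // No return false: the click proceeds as normal.
        };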

    Somehow I get this weird feeling that this has something to do with Washington...
