• (cs) in reply to L.
    L.:
    Abso:
    L.:
    [JavaScript has] been too often used as an extension of HTML, or as an inferior version of Java, rather than the decent language in its own right that it actually is[...]

    me blathering

    Nice fake quote... A post by me saying JavaScript is not actually a piece of crap? Impossibru!

    I quoted the wrong post and fucked up the quote tags. Sorry about that. The quote was actually from Marnen Laibow-Koser.

  • (cs) in reply to Bobby Tables
    Bobby Tables:
    Not sure if troll, or just stupid.
    I'm not sure either, but my guess is that you're stupid.
    But the winter solstice is the 1st day of winter.
    Sure, if you insist on the Persian calendar. And yes, many people from European and Euro-derived cultures will also tell you the winter solstice day - particularly TV weather reporters and similar would-be pedants - is the first day of winter. But there's no law that makes it so. "Winter" is a cultural concept, not an astronomical one, and there is no official standard for when it begins and ends.
    http://en.wikipedia.org/wiki/Winter_solstice
    And that article does not support your claim (except that it says some people call the solstice day "the first day of winter", and it mentions the Persian calendar).
  • (cs) in reply to MichaelWojcik
    MichaelWojcik:
    "Winter" is a cultural concept, not an astronomical one, and there is no official standard for when it begins and ends.
    Am I missing something? I mean, what's that whooshing sound?
  • Dave (unregistered) in reply to Brian White
    Brian White:
    I have never used ASN.1. Does it have minification and gzip compression tools available if the issue is data bandwidth?

    Have you stopped beating your wife yet?

    This is roughly what you're asking there [0]. The fact that XML needs an additional memory- and CPU-intensive processing step just to remove the bloat is an indication of its failure as an encoding format. ASN.1 (and other similar formats) don't need gzip because they're already a pretty compact encoding, and so compressing it doesn't achieve much. Alternatively, you could regard its compression cost as having zero CPU overhead and zero memory overhead.

    [0] Actually that's not a very good analogy. Raymond Chen had one in his blog a couple of years ago but I can't remember it at the moment. A variation was a certain large company in the 1980s whose software was incredibly buggy and unreliable and so they made a big deal out of their crash-logging capabilities, which none of their competitors had, so obviously their software was better.

    (Since their competitors' software rarely, if ever, crashed, none of them needed this kludge in the first place, but the first company trumpeted it as a feature of their product when in fact it was only necessary because they were shipping crap to their customers).

  • Brian White (unregistered) in reply to Dave
    Dave:
    Brian White:
    I have never used ASN.1. Does it have minification and gzip compression tools available if the issue is data bandwidth?

    Have you stopped beating your wife yet?

    This is roughly what you're asking there [0]. The fact that XML needs an additional memory- and CPU-intensive processing step just to remove the bloat is an indication of its failure as an encoding format. ASN.1 (and other similar formats) don't need gzip because they're already a pretty compact encoding, and so compressing it doesn't achieve much. Alternatively, you could regard its compression cost as having zero CPU overhead and zero memory overhead.

    [0] Actually that's not a very good analogy. Raymond Chen had one in his blog a couple of years ago but I can't remember it at the moment. A variation was a certain large company in the 1980s whose software was incredibly buggy and unreliable and so they made a big deal out of their crash-logging capabilities, which none of their competitors had, so obviously their software was better.

    (Since their competitors' software rarely, if ever, crashed, none of them needed this kludge in the first place, but the first company trumpeted it as a feature of their product when in fact it was only necessary because they were shipping crap to their customers).

    GZIP is memory and CPU intensive? Our web server farm serves up our HTML GZip'ed without breaking a sweat. And browsers unzip it automatically literally bajillions of times a day. And again - the original comment was "my WEBSITE pushes a GB of traffic a day so I don't want to use JSON (because I have somehow mixed that up with XML)". How the f do you use ASN.1 on a WEBSITE in place of JSON?
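    For what it's worth, the per-request negotiation a server does here is tiny. A minimal sketch in Python (the function name and header handling are illustrative, not any particular framework's API):

```python
import gzip

def maybe_compress(body: bytes, accept_encoding: str) -> tuple[bytes, dict]:
    """Gzip a response body only if the client advertised support for it."""
    headers = {"Content-Type": "application/json"}
    # Accept-Encoding is a comma-separated list, possibly with ;q= weights
    encodings = [enc.split(";")[0].strip() for enc in accept_encoding.split(",")]
    if "gzip" in encodings:
        headers["Content-Encoding"] = "gzip"
        return gzip.compress(body), headers
    return body, headers
```

    A client sending `Accept-Encoding: gzip, deflate` gets the compressed body plus a `Content-Encoding: gzip` header; everyone else gets the bytes untouched.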

  • (cs)
    No data interchange format is perfect. ASN.1 (Abstract Syntax Notation One) is probably the closest (it's by far the most flexible and comprehensive) [...]
    The real WTF is claiming that ASN.1 is a data interchange format. It isn't. It's a meta-language that describes the contents of a binary file, in BER, CER, DER or (hysterical laugh) XER. In that sense, it's more comparable to XML Schema.

    I once had the pleasure of writing a BER decoder in C on Unix; it uses some nice features like mmap to treat the entire file as a block of memory. It has no buffer overflows, although it took a while before all the bugs were shaken out (especially around indefinite-length records).

    Ah yes, that's another thing: parsing BER in Java is a bitch because of Java's lack of unsigned integers.

    But whilst you can easily write a universal BER decoder, what you end up with is a list of 'the value with index 13 is -1' or 'the value with index 42 is "share and enjoy"'. If you want to know what indexes 13 and 42 stand for, you need the ASN.1 file, which is not the easiest to parse. I've looked at available libraries, but wasn't impressed.

    So yes, the combination of ASN.1 and some encoding rules is very flexible, but flexibility is not necessarily always a good thing.
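    To make the "universal decoder" point concrete, here is a toy definite-length TLV walker (a sketch in Python rather than the C or Java discussed above; it ignores constructed types, multi-byte tags and indefinite lengths, which is exactly where the real pain lives):

```python
def parse_tlv(data: bytes, offset: int = 0):
    """Decode one definite-length TLV record; return (tag, value, next_offset)."""
    tag = data[offset]
    length = data[offset + 1]
    if length & 0x80:  # long form: low 7 bits count the length octets that follow
        n = length & 0x7F
        length = int.from_bytes(data[offset + 2:offset + 2 + n], "big")
        offset += n
    start = offset + 2
    return tag, data[start:start + length], start + length

def parse_all(data: bytes):
    """Yield (tag, value) pairs for a flat run of TLV records."""
    offset = 0
    while offset < len(data):
        tag, value, offset = parse_tlv(data, offset)
        yield tag, value
```

    Feeding it `b'\x02\x01\xff'` (an INTEGER) yields tag 2 with raw value `b'\xff'`; turning that into -1 needs a signed conversion, which is the unsigned-integer headache mentioned above. And as the comment says, knowing what that tag/value pair actually *means* still requires the ASN.1 module definition.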

  • Brian White (unregistered) in reply to Brian White
    Brian White:
    Dave:
    Brian White:
    I have never used ASN.1. Does it have minification and gzip compression tools available if the issue is data bandwidth?

    Have you stopped beating your wife yet?

    This is roughly what you're asking there [0]. The fact that XML needs an additional memory- and CPU-intensive processing step just to remove the bloat is an indication of its failure as an encoding format. ASN.1 (and other similar formats) don't need gzip because they're already a pretty compact encoding, and so compressing it doesn't achieve much. Alternatively, you could regard its compression cost as having zero CPU overhead and zero memory overhead.

    [0] Actually that's not a very good analogy. Raymond Chen had one in his blog a couple of years ago but I can't remember it at the moment. A variation was a certain large company in the 1980s whose software was incredibly buggy and unreliable and so they made a big deal out of their crash-logging capabilities, which none of their competitors had, so obviously their software was better.

    (Since their competitors' software rarely, if ever, crashed, none of them needed this kludge in the first place, but the first company trumpeted it as a feature of their product when in fact it was only necessary because they were shipping crap to their customers).

    GZIP is memory and CPU intensive? Our web server farm serves up our HTML GZip'ed without breaking a sweat. And browsers unzip it automatically literally bajillions of times a day. And again - the original comment was "my WEBSITE pushes a GB of traffic a day so I don't want to use JSON (because I have somehow mixed that up with XML)". How the f do you use ASN.1 on a WEBSITE in place of JSON?

    So I didn't remember when gzip support became standard in HTTP. Looked it up, and it arrived with HTTP/1.1.

    http://developer.yahoo.com/performance/rules.html:

    Starting with HTTP/1.1, web clients indicate support for compression with the Accept-Encoding header in the HTTP request. Accept-Encoding: gzip, deflate

    If the web server sees this header in the request, it may compress the response using one of the methods listed by the client. The web server notifies the web client of this via the Content-Encoding header in the response. Content-Encoding: gzip

    Gzip is the most popular and effective compression method at this time. It was developed by the GNU project and standardized by RFC 1952. The only other compression format you're likely to see is deflate, but it's less effective and less popular.

    Gzipping generally reduces the response size by about 70%. Approximately 90% of today's Internet traffic travels through browsers that claim to support gzip. If you use Apache, the module configuring gzip depends on your version: Apache 1.3 uses mod_gzip while Apache 2.x uses mod_deflate.

    There are known issues with browsers and proxies that may cause a mismatch in what the browser expects and what it receives with regard to compressed content. Fortunately, these edge cases are dwindling as the use of older browsers drops off. The Apache modules help out by adding appropriate Vary response headers automatically.

    Servers choose what to gzip based on file type, but are typically too limited in what they decide to compress. Most web sites gzip their HTML documents. It's also worthwhile to gzip your scripts and stylesheets, but many web sites miss this opportunity. In fact, it's worthwhile to compress any text response including XML and JSON.

    So JSON adds maybe 10% to the raw text, in order to allow you to send programming objects across the wire to the browser. But when GZIPed, you only end up sending about 30% of that total size. So you end up sending across maybe 33% of the size of the raw text to almost all browsers. Again, this is for sending actual programming objects across the wire, not just pretty-printed data.
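    Those percentages are easy to sanity-check; a quick sketch (the exact ratio depends entirely on the payload, so treat the number as illustrative, not guaranteed):

```python
import gzip, json

# A repetitive JSON payload, the shape typical of API responses
payload = json.dumps(
    [{"success": True, "redirect_to": f"/account/{i}"} for i in range(1000)]
).encode("utf-8")

ratio = len(gzip.compress(payload)) / len(payload)
print(f"compressed to {ratio:.0%} of original")  # repetitive JSON compresses very well
```

    For structured, repetitive text like this, the compressed size routinely comes in well under the ~30% figure quoted above.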

    I should mention that when our shop does .NET we use object serialization there as well to send objects across the wire. The .NET serializer uses XML for objects. More bloat, but then again the objects are more complex. How many times has .NET XML serialization been the cause of a site performance or network performance problem for us? Zero.

  • RodMan (unregistered) in reply to Martijn Otto
    Martijn Otto:
    The biggest WTF is of course that a space was added after the {. This means the JSON was created manually, instead of using one of the myriad available functions to do exactly that.

    Not necessarily. But even if it was built by hand: put "success" first and "redirect_to" second and it fails, even though that's the same JSON structure. Not using tools where they belong is TRWTF.
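    The fragility is easy to demonstrate (Python here, with json.dumps standing in for any serializer; the prefix check mirrors the kind of string test in the article):

```python
import json

data = {"success": True, "redirect_to": "/home"}

# Hand-built: the exact spacing is whatever the author happened to type
manual = '{ "success": true, "redirect_to": "/home" }'

# Serialized: guaranteed valid JSON, but no promises about spacing or key order
generated = json.dumps(data)

# Both decode to the same structure...
assert json.loads(manual) == json.loads(generated)

# ...but a naive check against the wire format only matches the hand-built one
assert manual.startswith('{ "success"')
assert not generated.startswith('{ "success"')
```

    Any code that inspects the serialized text instead of parsing it is betting on incidental whitespace and key order that no JSON producer promises to preserve.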

Leave a comment on “JavaScript JSON Parsing”
