Admin
I quoted the wrong post and fucked up the quote tags. Sorry about that. The quote was actually from Marnen Laibow-Koser.
Admin
Have you stopped beating your wife yet?
This is roughly what you're asking there [0]. The fact that XML needs an additional memory- and CPU-intensive processing step just to remove its bloat is an indication of its failure as an encoding format. ASN.1 (and other similar formats) doesn't need gzip because it's already a pretty compact encoding, so compressing it doesn't achieve much. Alternatively, you could regard its compression as having zero CPU overhead and zero memory overhead. (The sketch below, after the footnote, makes this concrete.)
[0] Actually, that's not a very good analogy. Raymond Chen had one on his blog a couple of years ago, but I can't remember it at the moment. A variation was a certain large company in the 1980s whose software was so incredibly buggy and unreliable that they made a big deal out of their crash-logging capabilities, which none of their competitors had, so obviously their software was better.
(Since their competitors' software rarely, if ever, crashed, none of them needed this kludge in the first place, but the first company trumpeted it as a feature of their product when in fact it was only necessary because they were shipping crap to their customers.)
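To make the gzip point concrete, here's a minimal sketch; the record layout is invented for illustration (it is not real ASN.1), and the exact numbers will vary with the data. It renders the same records as verbose XML and as a hand-rolled length-prefixed binary, then gzips both: most of what gzip removes from the XML is the markup redundancy, while the compact form starts out near where gzip would take it.

    import java.io.ByteArrayOutputStream;
    import java.nio.charset.StandardCharsets;
    import java.util.zip.GZIPOutputStream;

    public class CompressionDemo {
        // Gzip a buffer in memory and report the compressed size.
        static int gzippedSize(byte[] data) throws Exception {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
                gz.write(data);
            }
            return buf.size();
        }

        public static void main(String[] args) throws Exception {
            StringBuilder xml = new StringBuilder("<records>");
            ByteArrayOutputStream compact = new ByteArrayOutputStream();
            for (int i = 0; i < 500; i++) {
                String name = "user" + i;
                // Verbose XML rendering of one record.
                xml.append("<record><name>").append(name)
                   .append("</name><value>").append(i % 100).append("</value></record>");
                // Hand-rolled compact rendering: length-prefixed name, one value byte.
                byte[] n = name.getBytes(StandardCharsets.UTF_8);
                compact.write(n.length);
                compact.write(n, 0, n.length);
                compact.write(i % 100);
            }
            xml.append("</records>");

            byte[] xmlBytes = xml.toString().getBytes(StandardCharsets.UTF_8);
            byte[] compactBytes = compact.toByteArray();
            System.out.printf("XML:     %6d raw, %6d gzipped%n",
                    xmlBytes.length, gzippedSize(xmlBytes));
            System.out.printf("compact: %6d raw, %6d gzipped%n",
                    compactBytes.length, gzippedSize(compactBytes));
        }
    }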
Admin
Gzip is memory- and CPU-intensive? Our web server farm serves up our HTML gzipped without breaking a sweat, and browsers unzip it automatically literally bajillions of times a day. And again: the original comment was "my WEBSITE pushes a GB of traffic a day so I don't want to use JSON (because I have somehow mixed that up with XML)". How the f do you use ASN.1 on a WEBSITE in place of JSON?
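For reference, the "browsers unzip it automatically" part is plain header negotiation: the client offers Accept-Encoding: gzip and the server answers with Content-Encoding: gzip. A minimal Java client doing the same dance by hand (the URL is a placeholder; substitute any server that supports gzip):

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.zip.GZIPInputStream;

    public class GzipClient {
        public static void main(String[] args) throws Exception {
            // Placeholder URL; substitute any server that supports gzip.
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("https://example.com/").openConnection();
            // Offer gzip, exactly as every browser does on every request.
            conn.setRequestProperty("Accept-Encoding", "gzip");

            InputStream in = conn.getInputStream();
            // If the server took us up on the offer, transparently inflate.
            if ("gzip".equalsIgnoreCase(conn.getContentEncoding())) {
                in = new GZIPInputStream(in);
            }
            System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
        }
    }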
Admin
I once had the pleasure of writing a BER decoder in C on Unix, and it uses some nice features like mmap() to pretend the entire file is a block of memory. It doesn't have buffer overflows, although it took a while before all the bugs were removed (especially around the indefinite-length records).
Ah yes, that's another thing: parsing BER in Java is a bitch because of Java's lack of unsigned integers.
But whilst you can easily write a universal BER decoder, what you end up with is a list of 'the value with index 13 is -1' or 'the value with index 42 is "share and enjoy"'. If you want to know what indexes 13 and 42 stand for, you need the ASN.1 file, which is not the easiest to parse. I've looked at available libraries, but wasn't impressed.
So yes, the combination of ASN.1 and some encoding rules is very flexible, but flexibility is not necessarily always a good thing.
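To ground that: a universal BER decoder really is just a tag-length-value loop, and the Java pain point is that every byte read has to be masked back to its unsigned value. A minimal sketch, assuming single-byte tags, definite lengths, and primitive values only (real BER also has multi-byte tags, constructed types, and the indefinite-length form mentioned above):

    import java.util.Arrays;

    public class BerSketch {
        static void decode(byte[] data) {
            int pos = 0;
            while (pos < data.length) {
                // Java bytes are signed, so every read has to be masked
                // with & 0xFF to recover the unsigned wire value.
                int tag = data[pos++] & 0xFF;
                int len = data[pos++] & 0xFF;
                if ((len & 0x80) != 0) { // long form: low bits say how many length bytes follow
                    int n = len & 0x7F;
                    len = 0;
                    for (int i = 0; i < n; i++) {
                        len = (len << 8) | (data[pos++] & 0xFF);
                    }
                }
                byte[] value = Arrays.copyOfRange(data, pos, pos + len);
                pos += len;
                System.out.println("tag " + tag + " -> " + Arrays.toString(value));
            }
        }

        public static void main(String[] args) {
            // Hand-encoded test data: INTEGER 42, then IA5String "hi".
            decode(new byte[] { 0x02, 0x01, 0x2A, 0x16, 0x02, 'h', 'i' });
        }
    }

As described above, all it can report is that tag 2 carries the byte 42 and tag 22 carries "hi"; knowing what those tags actually stand for takes the ASN.1 schema.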
Admin
I didn't remember when gzip became standardized, so I looked it up: it came in with HTTP/1.1.
http://developer.yahoo.com/performance/rules.html:
So JSON adds maybe 10% to the raw text in order to let you send programming objects across the wire to the browser. But when gzipped, you only end up sending about 30% of that total size, so you end up sending maybe 33% of the size of the raw text to almost all browsers. Again, this is for sending actual programming objects across the wire, not just pretty-printed data.
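As a back-of-envelope check on those ratios, here's a sketch that gzips a made-up repetitive JSON payload; real ratios depend entirely on the data, but structured JSON routinely compresses to well under half its raw size:

    import java.io.ByteArrayOutputStream;
    import java.nio.charset.StandardCharsets;
    import java.util.zip.GZIPOutputStream;

    public class JsonGzipRatio {
        public static void main(String[] args) throws Exception {
            // A made-up repetitive JSON payload standing in for a real response.
            StringBuilder sb = new StringBuilder("[");
            for (int i = 0; i < 200; i++) {
                sb.append("{\"id\":").append(i)
                  .append(",\"name\":\"user").append(i).append("\"},");
            }
            sb.setCharAt(sb.length() - 1, ']'); // replace the trailing comma
            byte[] raw = sb.toString().getBytes(StandardCharsets.UTF_8);

            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
                gz.write(raw);
            }
            System.out.printf("raw %d bytes, gzipped %d bytes (%.0f%% of raw)%n",
                    raw.length, buf.size(), 100.0 * buf.size() / raw.length);
        }
    }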
I should mention that when our shop does .NET, we use object serialization there as well to send objects across the wire. The .NET serializer uses XML for objects. More bloat, but then again the objects are more complex. How many times has .NET XML serialization been the cause of a site or network performance problem for us? Zero.
Admin
Not necessarily. But even if so, put "success" first and "redirect_to" second and it fails, even though that's the same JSON structure. Not using tools where they belong is TRWTF.
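A JSON object is an unordered set of name/value pairs, so a decent parser treats both orderings as the same structure. A quick check, using Jackson and made-up member values (the library choice and the values are mine, not the thread's):

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class KeyOrderDemo {
        public static void main(String[] args) throws Exception {
            ObjectMapper mapper = new ObjectMapper();
            // The same object with its members in either order.
            JsonNode a = mapper.readTree("{\"success\": true, \"redirect_to\": \"/home\"}");
            JsonNode b = mapper.readTree("{\"redirect_to\": \"/home\", \"success\": true}");
            // JSON objects are unordered name/value sets, so these compare equal.
            System.out.println(a.equals(b)); // true
        }
    }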