• SurfMan (unregistered) in reply to boa13

    Anonymous:
    DiamondDave:
    So, why do people keep bashing XML and not giving a good reason why they think it sucks?

    The only reason I think xml gets a bad rap is because of people who misuse it, like the author of today's wtf.  Don't bash something if you can't back up your opinion...


    XML is like violence: if it doesn't work, use more.

     

    If you steal funny quotes, at least mention the source: http://www.bash.org/?546813

  • zootm (unregistered) in reply to nationElectric
    nationElectric:
    People hate XML because it isn't a rigorous standard, but they haven't heard of any of the higher-level standards that are implemented *in* XML. People hate XML because the entire stream has be read before they can do anything with it, but they've never heard of SAX. People hate XML because they don't like implementing schemas as DTDs, but they've never heard of XSD. People think XML is ridiculous because it's "BASED OFF OF HTML", but they've apparently never heard of SGML.

    The WTF here is that people are shooting their mouths off about technologies they clearly don't understand.

    XML, in and of itself, is a simple, loose, generic, open standard that other standards can be implemented upon. Yeah, if you're Billy Joe Bob implementing BJBML, you're not going to get much benefit in the way of interop, granted. But if you're the W3C, or a coalition of companies involving, say, IBM or Adobe, it makes a lot of sense to base your standard on XML. You get an established syntax, which means you already have a bunch of parsers, validators, APIs, editors, and more to deal with your data. And, because of all that, converting data into or out of your format from other XML-based formats is relatively simple. Hell, even if you are Billy Joe Bob, you still get all those advantages with your format -- you just don't get the audience, unless your standard is pretty good and catches on.

    This is why something like SVG is cool even though OMFG IT'S TEH XML!!! You can pull data from somewhere in XML, run it through XSL to merge it with SVG you've written, and dynamically generate complex graphics, animations, or UIs. That can be extended even further if you're using those graphics within, say, a DocBook or DITA project. Your data is cleanly separated from your business and presentation logic, you only need a small toolset, all of it open-source, with a consistent underlying syntax, and it will work on almost every platform you can imagine. And you can even process it in a SAX event pipeline.

    Of course, it's not perfect. Like any other technology, it has its limitations and there are plenty of ways to abuse it. In particular, it's not great for encapsulating a lot of binary data, and it can sometimes be overkill if you're working with a pretty flat data structure that's not going to be used outside of your application (for example, a Java properties file). And, like in today's example, it's completely pointless if you're just going to embed giant CDATA blocks in it.

    As far as the syntax goes, yeah, it's wordy. I'd imagine that most programmers here can appreciate the value in being able to edit XML as plaintext, and that wordiness can be useful in keeping things straight, given the degree of nesting that can happen when you're trying to model even moderately complex data in a tree structure. If you don't like that, though, there are plenty of editors out there that will streamline things for you. And yeah, XML isn't lean, but if all we wanted was lean we'd be writing everything in assembler. Portability costs; deal with it. Storage space is cheap, memory is cheap, processors are cheap, bandwidth is getting pretty cheap. Parsing, storing, and transmitting XML are generally pretty painless here in the 21st century, unless you're dealing with truly massive amounts of data, in which case you should be using an XML store to deal with all the pain for you.


    applause

    Thank you, I was worried I was going to have to post that.

    One interesting quote though: "So the essence of XML is this: the problem it solves is not hard, and it does not solve the problem well." -- Jerome Simeon and Philip Wadler

    The point, however, is that it does solve the problem, and we don't have another standard system that does in any way that's even close to acceptable. SGML markup is a little expansive, sure, but it's the best we have, at present, and it's standard, meaning you don't have to deal with the markup since the tool libraries are already there, written for you.

  • (cs) in reply to Phil

    You guys would all do the fox if you had the chance, hair, fire, etc. be damned.

    Now that we've determined that XML sucks, why don't we have the element versus attribute debate?

  • (cs) in reply to JohnO
    JohnO:

    You guys would all do the fox if you had the chance, hair, fire, etc. be damned.

    Now that we've determined that XML sucks, why don't we have the element versus attribute debate?



    Well, let me begin: attributes make sense if they describe or add useful information to the tag's data.
    Having to escape quotes just because they are in an attribute sucks, I'd say (see the sketch at the end of this comment).

    Also, I think XML is very useful if you have to convert from one format to another, as long as you use meaningful tag names (not like SAP and others do). That way you can save a lot of time; sometimes you don't even need a spec (because specs tend to be outdated or "unavailable")...
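
    For what it's worth, any decent XML library does the attribute escaping for you, so the pain is mostly when you write the stuff by hand. A minimal sketch in Python (stdlib only; the element and attribute names are invented for illustration):

        import xml.etree.ElementTree as ET

        book = ET.Element("book")
        book.set("title", 'He said "hello"')   # attribute form: the library escapes the quotes
        title = ET.SubElement(book, "title")   # element form: quotes need no escaping here
        title.text = 'He said "hello"'

        print(ET.tostring(book, encoding="unicode"))
        # roughly: <book title="He said &quot;hello&quot;"><title>He said "hello"</title></book>
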
  • (cs) in reply to nationElectric
    nationElectric:
    People hate XML because it isn't a rigorous standard, but they haven't heard of any of the higher-level standards that are implemented *in* XML. People hate XML because the entire stream has to be read before they can do anything with it, but they've never heard of SAX.


    Yeah, but if you use SAX, it still has to read all of the file up to the part that interests you, which is why XML isn't suited to a lot of situations where you really need to be able to perform random access on a file.
    And the danger of XML is that it may be tempting to use it in situations where it seems suited to the job, but where it won't actually scale well for large amounts of data, for this reason.

    Example: one guy where I work used XML to store call-graph profiles. His rationale was that it would be easy to browse through it using any readily available XML browser.
    But when you run the thing for a few seconds and get a huge XML file that is several dozen megabytes large, every readily available XML viewer collapses under the weight when trying to open it.

    These are the kinds of situations where I'd rather use a lightweight RDBMS like SQLite as a storage solution.
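
    For concreteness, here is roughly what the SQLite route could look like; the call-graph schema below (caller, callee, count) is made up for illustration, not taken from the tool mentioned above:

        import sqlite3

        con = sqlite3.connect("profile.db")
        con.execute("CREATE TABLE IF NOT EXISTS calls (caller TEXT, callee TEXT, n INTEGER)")
        con.executemany("INSERT INTO calls VALUES (?, ?, ?)",
                        [("main", "parse", 1), ("parse", "read_token", 5000)])
        con.commit()

        # Random access: answer one question without reading the whole profile.
        for caller, n in con.execute("SELECT caller, n FROM calls WHERE callee = ?", ("read_token",)):
            print(caller, n)
        con.close()
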
  • (cs)

    An XML file is really great if....

    1) No human needs to look at it or work with it by hand, ever.
    2) The schema is cogently designed for the problem at hand, and is not some over-reaching monstrosity.

    I personally like ANT, but I feel that it is only half done. There should be tools (graphic or otherwise) that automate the creation and manipulation of build.xml files.

  • Willie (unregistered) in reply to CornedBee

    XML is not meant to be human-readable; it is meant to be renderable in any standard text editor. The reason being that it is a DOCUMENT format, not a DATA format.

    If it were meant as a data-exchange format, it would have included provisions for the inclusion of binary data (numbers, compressed data, encrypted data, images, sounds, video, whatever) and all the necessary information for cross-platform compatibility (like byte ordering).

    There's no reason we couldn't have created a standardized data file format, with standard editors, parsers, ... XML just did this well enough to be widely adopted before a sane standard could be created.

  • Hendrik (unregistered) in reply to skicow

    If you liked that picture, a whole lot more can be found on http://firefoxy.vegard2.no/
    Cheers, H.

  • Hendrik (unregistered) in reply to Hendrik
    Anonymous:
    If you liked that picture, a whole lot more can be found on http://firefoxy.vegard2.no/
    Cheers, H.

    I of course meant to comment on the firefox picture.
  • monkey (unregistered) in reply to Hendrik

    What no-one has said yet is that if you are transferring data with an external client, then using an agreed XML schema means that when the data arrives, if it's in the correct format, it validates and is usable (see the sketch at the end of this comment).

    When you use something else, say an Excel spreadsheet, then people seem to like moving the columns round or changing the layout in other ways, so that your parser has to be rewritten every time or you have to reformat the data (very dull). Do this once a week and you will soon be pleading for XML.

    Human-readable and nice to read are not the same thing.

    XML, as has been said, is a tool misunderstood by managers and coders alike, perhaps. It is very good for some applications and not for others (like all tools). And please don't talk about programming in XML, or HTML for that matter; it's markup, not programming. That's why no-one created the <if> tag.

    It is sad how many people in this 'debate' don't have a clue about XML but seem to have such strong opinions. Possibly the same people who have sticky keyboards from wanking over some very dubious-looking 'foxy lady'.

    </if> Get a life, get a girlfriend, and learn what XML really is :)
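
    To make the validation point above concrete, a minimal sketch using the third-party lxml library (an assumption on my part; the plain Python stdlib has no XSD support, and the file names and the process() handler are hypothetical):

        from lxml import etree

        schema = etree.XMLSchema(etree.parse("agreed-format.xsd"))
        doc = etree.parse("incoming-data.xml")

        if schema.validate(doc):
            process(doc)               # hypothetical handler for data that checks out
        else:
            print(schema.error_log)    # tell the sender exactly what is wrong
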

  • monkey (unregistered) in reply to monkey
    Anonymous:
    it's markup not programming that's why no-one created the <if> tag
    </if>

    sorry the IF tag
  • deets (unregistered) in reply to Mike

    Yes. I'm a bit late with replying, so others have pointed out the troubles with XML as a format already. The point is that XML documents are human-readable, but writable only to a very limited extent, and especially not if what you write effectively resembles a piece of code, as my post suggested.

    Let's start simple: <property name="basedir.web.base" value="${basedir}/../${sgw.project.web.base.subdirectory}/" />

    Anybody else feel that that is bullcrap? Why not

    basedir.web.base=$basedir/../$sgw.project.web.base.subdirectory

    Did you ever try to make something dependent on a precondition, like generating code only if the source is newer than the output? Let's see how Ant deals with that:

    <target name="foo" unless="some.var" depends="check.some.condition">
        ...
    </target>

    <target name="check.some.condition">
        <uptodate property="some.var" targetfile="myfile">
            <fileset includes="**/*.whatever"/>
        </uptodate>
    </target>

    Why not:

    if uptodate("myfile", fileset("**/*.whatever")): ....

    So, what I often enough end up doing is embedding e.g. Jython into Ant (choose BeanShell or Groovy or whatever pleases you) to accomplish such tasks; a rough sketch of the script style I'd prefer follows below.

    Having said that, I do like the Ant tasks themselves. But the XML syntax? No sir.
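
    Just to make the comparison concrete, here is roughly what that pseudo-code could look like as an actual script; uptodate() is hand-rolled for the sketch, not an existing library, and regenerate() is a hypothetical code-generation step:

        import glob, os

        def uptodate(target, sources):
            """True if target exists and is newer than every source file."""
            if not os.path.exists(target):
                return False
            newest_src = max((os.path.getmtime(s) for s in sources), default=0)
            return os.path.getmtime(target) >= newest_src

        sources = glob.glob("**/*.whatever", recursive=True)
        if not uptodate("myfile", sources):
            regenerate()    # hypothetical: only runs when the output is stale
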

  • deets (unregistered) in reply to deets

    Ok... so the forum swallowed my tags... replying to others that embedded XML in their posts didn't reveal an obvious way to do it; it actually appeared as "normal" tags... So, as I don't have the time to work that out right now, I'm counting on your imagination, guys...

  • (cs) in reply to monkey
    Anonymous:
    Anonymous:
    it's markup not programming that's why no-one created the <if> tag
    </if>

    sorry the IF tag

    http://www.w3.org/TR/xslt#section-Conditional-Processing-with-xsl:if

  • (cs) in reply to nationElectric

    GAH! That's just uuuuuuuuugly....

    Anyway, the point being, XSLT has an if tag. Of course, XSL follows a declarative programming model, so I'm sure lots of people hate it for that reason.

  • monkey (unregistered) in reply to nationElectric
    nationElectric:
    Anonymous:
    Anonymous:
    it's markup not programming that's why no-one created the <if> tag
    </if>

    sorry the IF tag

    http://www.w3.org/TR/xslt#section-Conditional-Processing-with-xsl:if




    Is that not XSL?
    "This specification defines the syntax and semantics of XSLT, which is a language for transforming XML documents into other XML documents."

    XML is just structured data.

  • (cs) in reply to monkey

    Monkey:

    "When you use something else, say an excel spreadsheet then people seem to like moving the columns round or changing the layout in other ways so that your parser has to be re-written every time or you have to reformat the data (very dull). Do this once a week and you will soon be pleading for XML."

    Amen.

  • (cs) in reply to JohnO
    JohnO:

    You guys would all do the fox if you had the chance, hair, fire, etc. be damned.

    Now that we've determined that XML sucks, why don't we have the element versus attribute debate?



    I vote for attributes. The SAX parser I'm using cuts strings inside elements into pieces (even if they are pretty short); this causes a lot of extra work to fix the pieces.
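
    That chunking is standard SAX behaviour: characters() is allowed to fire several times for a single text node. The usual workaround is to buffer the pieces and join them at endElement(). A stdlib Python sketch of the idea (not the actual code in question, and it only handles flat elements):

        import xml.sax

        class Handler(xml.sax.ContentHandler):
            def startElement(self, name, attrs):
                self.chunks = []

            def characters(self, content):
                self.chunks.append(content)    # may be called several times per element

            def endElement(self, name):
                print(name, repr("".join(self.chunks)))    # back to one piece

        xml.sax.parseString(b"<root><item>one long string</item></root>", Handler())
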
  • (cs) in reply to Zlodo
    Zlodo:
    nationElectric:
    People hate XML because it isn't a rigorous standard, but they haven't heard of any of the higher-level standards that are implemented *in* XML. People hate XML because the entire stream has to be read before they can do anything with it, but they've never heard of SAX.


    Yeah, but if you use SAX, it still has to read all of the file up to the part that interests you, which is why XML isn't suited to a lot of situations where you really need to be able to perform random access on a file.
    And the danger of XML is that it may be tempting to use it in situations where it seems suited to the job, but where it won't actually scale well for large amounts of data, for this reason.

    Example: one guy where I work used XML to store call-graph profiles. His rationale was that it would be easy to browse through it using any readily available XML browser.
    But when you run the thing for a few seconds and get a huge XML file that is several dozen megabytes large, every readily available XML viewer collapses under the weight when trying to open it.

    These are the kinds of situations where I'd rather use a lightweight RDBMS like SQLite as a storage solution.

    In situations like this, XML stores (like http://xml.apache.org/xindice/) are really useful. They're basically like RDBMSes for XML: load all your megs of data in, and then just query (via XPath) for whatever you need to work with at a given time.
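
    Not quite an XML store, but the "load once, query for what you need" idea in miniature, using the stdlib's limited XPath support (the element and attribute names are invented and the file is hypothetical):

        import xml.etree.ElementTree as ET

        tree = ET.parse("calls.xml")
        # ElementTree understands a small XPath subset, including attribute predicates:
        for c in tree.findall(".//call[@callee='read_token']"):
            print(c.get("caller"), c.get("count"))
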

  • (cs) in reply to nationElectric

    LAME.

  • (cs) in reply to ammoQ
    ammoQ:
    The SAX parser I'm using cuts strings inside elements into pieces (even if they are pretty short); this causes a lot of extra work to fix the pieces.


    Can you use a DOM model instead?  DOM provides a normalize method which merges adjacent text nodes into a single node.
  • (cs) in reply to monkey

    Monkey:

    "is that not XSL... XML is just stuctured data"

    XSL isn't part of the base XML spec, no, but it is a w3c standard that is implemented in XML. It allows you to write XML that applies logic to data, and it is clearly intended by the w3c to extend XML for exactly that function. It strikes me that it fits the bill for "programming in XML."

  • (cs) in reply to nationElectric
    nationElectric:
    In situations like this, xml stores (like http://xml.apache.org/xindice/) are really useful. They're basically like rdbms's for xml: load all your megs of data in, and then just query (via xpath) for whatever you need to work with at a given time.


    Ok. Point taken. (By the way, the implementation you mention wouldn't be able to handle something as large as what I mentioned earlier, from what the FAQ says; plus I wonder if there is any implementation that would come anywhere near SQLite plus a custom tree-to-RDBMS mapping on top of it, in terms of footprint and efficiency.)

    My main gripe with XML is very simple.
    Let's say you have an app that must save a bunch of objects that are linked to each other.
    If you want to save this to XML, you either have to write code to map this to a DOM, or to output XML directly.
    If you want to load this from XML, you have to write code to reconstruct this object hierarchy from a DOM or from SAX events.

    The code that does this is, as far as I'm concerned, what I would like to get automated.
    So, let's suppose I write a serialization system that somehow, or from somewhere, grabs enough information about my different classes to automatically generate the required code to do all this.
    Now, what does XML bring to the table to make this easier? Nothing.
    I could just as well load and save from/to binary files and get better performance and a smaller footprint for my data. I could make my serialization system able to work with different storage formats, so I could just import/export XML if that is needed for interoperability with stuff that can't directly use this serialization system.

    So basically XML doesn't solve the actual problem I'd like solved, which is having to write code to map my runtime objects to a storage format. Arguments such as "how many parsers does one need to write?" are utterly meaningless as far as I'm concerned, because the real problem, I think, is to avoid writing tons and tons of code to map the runtime representation to a storage format.
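
    To be concrete about the kind of automation meant here, a toy sketch of a reflection-based serializer in Python; the Point class is invented, and the sketch deliberately ducks the hard part of the complaint above (objects that reference each other, shared nodes, cycles):

        import xml.etree.ElementTree as ET

        class Point:
            def __init__(self, x, y):
                self.x, self.y = x, y

        def to_xml(obj):
            node = ET.Element(type(obj).__name__)
            for name, value in vars(obj).items():
                if hasattr(value, "__dict__"):      # nested object: recurse
                    child = to_xml(value)
                    child.tag = name
                    node.append(child)
                else:
                    ET.SubElement(node, name).text = str(value)
            return node

        print(ET.tostring(to_xml(Point(3, 4)), encoding="unicode"))
        # roughly: <Point><x>3</x><y>4</y></Point>
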
  • (cs) in reply to Zlodo

    Ok, and by the way, why can't we edit our posts?
    I would have liked to fix the horrible repetition at the end of my post; please just imagine that the last paragraph was formulated in a less clumsy and repetitive way, thanks :p

    (obligatory: the real WTF is this forum software)

  • dunkthegeek (unregistered) in reply to Hendrik

    Anonymous:
    Anonymous:
    If you liked that picture, a whole lot more can be found on http://firefoxy.vegard2.no/
    Cheers, H.

    I of course meant to comment on the firefox picture.


    dude, you forgot to quote the actual image and make me scroll an extra screenful of that b*tch clutching the #1 target of Republicans.

    geez, if you are gonna do something, do it right.


    d

  • A Programmer with a Clue (unregistered) in reply to nationElectric

    With all the ignorance of XML on here, I think there will be a lot more WTF posts coming from their own code in the future...

  • maestro foo (unregistered) in reply to RevMike
    RevMike:
    ammoQ:
    The SAX parser I'm using cuts strings inside elements into pieces (even if they are pretty short); this causes a lot of extra work to fix the pieces.


    Can you use a DOM model instead?  DOM provides a normalize method which merges adjacent text nodes into a single node.


    I've only used the MSXML DOM, and there it's not possible to have multiple adjacent text nodes, simply because there's no clear distinction between them. When you manipulate the DOM in memory, it is possible to insert a text node after another text node, but the distinction is lost when you load it back from a file.
  • (cs) in reply to RevMike
    RevMike:
    ammoQ:
    The SAX parser I'm using cuts strings inside elements into pieces (even if they are pretty short); this causes a lot of extra work to fix the pieces.


    Can you use a DOM model instead?  DOM provides a normalize method which merges adjacent text nodes into a single node.

    Theoretically I could, but that would mean rewriting the app; and (apart from cutting text nodes into chunks) the SAX parsing model is more suitable for my application. If I used DOM, the next step after building the DOM tree would be to iterate through it anyway. Maybe I will switch to a pull parser at the next major change.
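
    For reference, the pull-ish style in Python's stdlib is iterparse: you get whole elements (text already merged) as they complete, and you can discard them as you go. A sketch with invented tag names, just to show the shape of it:

        import xml.etree.ElementTree as ET

        for event, elem in ET.iterparse("big-file.xml", events=("end",)):
            if elem.tag == "record":
                print(elem.findtext("name"), elem.get("id"))
                elem.clear()    # keep memory use flat on huge files
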
  • ZIM (unregistered) in reply to Xepol
    Xepol:
    Calling XML a standard is an insult to every standard ever written before during and after the inception of XML.

    A classic! Going into my quotation file! (with attribute)

  • Mattpalmer1086 (unregistered) in reply to Willie

    I think you're talking about ASN.1.

    (http://asn1.elibel.tm.fr/en/)

    And it's been around for far longer than XML. But it made the fatal mistake of not having an "X" in its name, which is way sexier and more exciting.

  • Jabba D. Hutt (unregistered) in reply to johnl

    > In fact, it sounds like XML is the correct way to go in that case,
    > since sending data to other systems is what XML was originally intended for.


    I know you won't believe this, so here, cut-n-pasted from
    http://www.w3.org/XML/
    is the real poop:

    Extensible Markup Language (XML) is a simple, very flexible text format derived from SGML (ISO 8879). Originally designed to meet the challenges of large-scale electronic publishing, XML is also playing an increasingly important role in the exchange of a wide variety of data on the Web and elsewhere.

    Electronic publishing!  Not data!!

    Sorry, but you were the 1,000,000th person to repeat/recreate that misinformation,
    so you won the correction.


  • XML \o/ (unregistered) in reply to Jabba D. Hutt

    Anonymous:
    > In fact, it sounds like XML is the correct way to go in that case,
    > since sending data to other systems is what XML was originally intended for.


    I know you won't believe this, so here, cut-n-pasted from
    http://www.w3.org/XML/
    is the real poop:

    Extensible Markup Language (XML) is a simple, very flexible text format derived from SGML (ISO 8879). Originally designed to meet the challenges of large-scale electronic publishing, XML is also playing an increasingly important role in the exchange of a wide variety of data on the Web and elsewhere.

    Electronic publishing!  Not data!!

    Sorry, but you were the 1,000,000th person to repeat/recreate that misinformation,
    so you won the correction.


    I can't be arsed to look up the context of the quote you posted, but nowhere does it say that you should or should not use XML for 'exchanging data'; it only says that "XML is playing an increasingly important role in the exchange [...] of data". The increasingly important role XML plays in the exchange of data even suggests that it is OK to use XML for the exchange of data. The poop you excreted doesn't pass any judgment on using XML for publishing data. It's not all that uncommon for things to be used for things they weren't designed for; just ask the designers of Boeing 767s and the architect of the WTC.

    Apart from that, what's the fokking difference between 'electronic publishing' and 'exchanging data'? Before I can publish something electronically (I guess that means web pages and such), I need to exchange data (send "content", excusez le mot), right? Before you go all "Electronic publishing! Not data!!", you might want to define the difference between the two.

    I mean, knives were originally designed to meet the challenges of cutting up meat for eating, but they play an increasingly important role when people run out of bullets.

    In the meantime, while all the XML-bashers bash XML and write their own parsers for their own weird file formats, real programmers see the difference between real programming and parsing weird file formats and just get the job done.

  • Adam (unregistered) in reply to BradC

    just pipe the second one through
        sed -e 's/"\(.*\)"/\1/' -e 's/""/"/g'
    right?

  • Adam (unregistered) in reply to Adam
    Anonymous:
    just pipe the second one through
        sed -e 's/"\(.*\)"/\1/' -e 's/""/"/g'
    right?


    (in reference to the broken CSV format 2 pages back)

    "NAME, CITY, STATE, AMOUNT, DATE"
    """Johnson   "",""Dallas    "",""TX  "",""  4291.30"",""20051205"""
    """Smith     "",""Seattle   "",""WA  "",""   900.00"",""20051201"""
    """Washington"",""NY        "",""NY  "",""  1200.50"",""20051215"""

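    For anyone allergic to sed, roughly the same cleanup in Python (a sketch against the quoted sample lines only; the real feed may hold more surprises):

        import re, sys

        for line in sys.stdin:
            line = re.sub(r'^"(.*)"$', r'\1', line.rstrip("\n"))   # strip the outermost quotes
            line = line.replace('""', '"')                         # collapse doubled quotes
            print(line)
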
  • your mom (unregistered) in reply to XML \o/
    Anonymous:

    It's not all that uncommon for things to be used for things they weren't designed for - just ask the designers of Boeing 767s and the architect of the WTC.



    awesome...  so you're saying 9/11 was a Good Thing, just like using XML to transfer data?  that's great!
  • (cs) in reply to procyon112
    procyon112:

    Let me attempt, in a few short minutes to define a decent PML (Procyon's Markup Language) that would kick XML's ass:


    The first 16 bits of the file is either 0, meaning that the tag definition is defined inline, or the length of the URL where the tag definition is to be found.



    Big endian or little endian?  Oops.  So much for "PML".

  • (cs)

    This wouldn't have anything to do with the infamous TPS report from the movie Office Space, would it?

  • (cs) in reply to XML \o/

    Anonymous:
    > In fact, it sounds like XML is the correct way to go in that case,
    > since sending data to other systems is what XML was originally intended for.


    I know you won't believe this, so here, cut-n-pasted from
    http://www.w3.org/XML/
    is the real poop:

    Extensible Markup Language (XML) is a simple, very flexible text format derived from SGML (ISO 8879). Originally designed to meet the challenges of large-scale electronic publishing, XML is also playing an increasingly important role in the exchange of a wide variety of data on the Web and elsewhere.

    Electronic publishing!  Not data!!

    Sorry, but you were the 1,000,000th person to repeat/recreate that misinformation,
    so you won the correction.

    Here we have a nicely done, reasonably civil bit of debate. The original poster posited that "sending data to other systems is what XML was originally intended for"; the response, with a linked quote from the w3, is that, no, it was originally intended for electronic publishing.

    No claims by the responder that XML shouldn't be used for data.

    Anonymous:

    I can't be arsed to look up the context of the quote you posted, but nowhere does it say that you should or should not use XML for 'exchanging data'; it only says that "XML is playing an increasingly important role in the exchange [...] of data". The increasingly important role XML plays in the exchange of data even suggests that it is OK to use XML for the exchange of data. The poop you excreted doesn't pass any judgment on using XML for publishing data. It's not all that uncommon for things to be used for things they weren't designed for; just ask the designers of Boeing 767s and the architect of the WTC.

    Apart from that, what's the fokking difference between 'electronic publishing' and 'exchanging data'? Before I can publish something electronically (I guess that means web pages and such), I need to exchange data (send "content", excusez le mot), right? Before you go all "Electronic publishing! Not data!!", you might want to define the difference between the two.

    I mean, knives were originally designed to meet the challenges of cutting up meat for eating, but they play an increasingly important role when people run out of bullets.

    In the meantime, while all the XML-bashers bash XML and write their own parsers for their own weird file formats, real programmers see the difference between real programming and parsing weird file formats and just get the job done.

    And then we have this. Missed the original point. Can't be bothered to look up terms integral to his argument (e.g., 'electronic publishing'), much less define them, yet berates his opponent for not defining terms. Examples that look like someone's spending too much time playing "Grand Theft Auto". (The "excusez le mot" was a nice touch, though. Very edicated-soundin').

    As for the definitions, I may be a bit off the mark, but basically, electronic publishing covers web pages, ebooks, and other formatted representations of electronic information primarily intended for human consumption; books, magazines, newspapers, corporate reports without all the dead trees. Data, on the other hand, is a much broader category; binary programs are data, but are not usually electronically published. There are lots of things you can do with data besides publishing it. And yes, you do need to exchange data in order to do electronic publishing, but that's not all you can do with data.

    Yes, people are using XML in ways it wasn't originally intended; XML is evolving, and tools are being created, to accommodate those uses. Yes, there are other ways to accomplish the same things without XML; some better, some worse. XML is a tool.

    And thank you, Mr. Real Programmer, for reminding me why I quit hanging around people who drink; too many conversations like this one.

     

  • (cs) in reply to Zlodo
    Zlodo:
    Let's say you have an app that must save a bunch of objects that are linked to each other. ... If you want to load this from XML, you have to write code to reconstruct this object hierarchy from a DOM or from SAX events. ... So, let's suppose I write a serialization system that somehow, or from somewhere, grabs enough information about my different classes to automatically generate the required code to do all this. Now, what does XML bring to the table to make this easier? Nothing. ... So basically XML doesn't solve the actual problem I'd like solved, which is having to write code to map my runtime objects to a storage format. Arguments such as "how many parsers does one need to write?" are utterly meaningless as far as I'm concerned, because the real problem, I think, is to avoid writing tons and tons of code to map the runtime representation to a storage format.


    Let me see if I understand this correctly. You are arguing that the benefit you gain from not having to write parsers is offset by the necessity to write XML-to-data converters, which comes from using a standard that is too generic for a good immediate fit with your application. That is actually an interesting argument, and it carries some validity. Though I have the feeling that using a well-supported "standard" format, and thus benefiting from the existing toolset, would tip the balance somewhat to the side of productivity, the argument does show that it is in fact a balancing act and, as such, a design decision that should be subject to thorough scrutiny beforehand. Given the usage in question, perhaps XML should sometimes be used more as a prototype and reference, to provoke insight into what is really, specifically, needed.

    @rbriem: There is the same psychology among geeks as among jocks. Jocks claim to be "real men"; geeks claim to be "real programmers". The more mature and/or secure, on the other hand, have no need to make boisterous claims.
  • (cs) in reply to Mikademus

    shhheeeesh, XML bitch-fest!!

    Re the XML-to-data issue: there are quite a few object persistence layers out there for most languages, with which you can map to XML, flat files, databases, etc. That solves the problem as long as performance isn't critical (and if you're using XML, it obviously isn't); there's no need to write tons of code, and it adds quite a lot of flexibility in terms of how you can store your objects.

    Now, can anyone direct me to where I can find a bitch-fest about design patterns?

  • kameldey (unregistered) in reply to FootFetishist
    FootFetishist:
    I personally think XML is for idiots BECAUSE of this "total human readability" thing. It's a brain-virus from those wannabe-coders called "web developers". I think IF you want a portable, flexible format, simply use EBML and create a reference explaining the tags in it. Then you've got a 100% portable, extensible AND small interface. You could even make it human-readable if you took a converter to XML, or an editor, both of which would use a file associating the tag IDs to strings, like in the reference. Actually, I'm developing a client-server multi-user life-management system using these ideas right now.
  • hoodaticus (unregistered) in reply to b1xml2
    b1xml2:
    You have to understand that those anti-XML nazis are oftentimes lacking in more than rudimentary skills in programming using XML. In fact, I would venture to say that those who rant against XML are also mediocre programmers, because if you were good, you'd know the worth and limits of XML and use XML where it is judicious, like anything in the programming world. XML caught the imagination of software architects and middle-to-senior developers for what it can do to overcome today's problems. Average and below-average programmers whose radar screen did not include how XML could solve problems encountered in the Enterprise were for the most part unconvinced of its merits, and for the other part felt threatened, because this became yet again another technology that they had to learn or lose out on. Anyone who opens his mouth and says XML should be consigned to oblivion, without providing at the very least some architectural reasons for it, is a very scared and angry programmer whose skills do not include a proper depth, appreciation, and understanding of the prevailing problem domains in the Enterprise space. In short, they are plain dumb.

    Amen. Ever hear of XML+XSLT>>XSL-FO for publisher-grade report rendering with multiple regions and flows?
