• P (unregistered)

    The WTF itself is massive and great, but if you're putting the punchline at relational database cursor misuse then the punchline is kinda lame. Lots of people have done that.

    Frankly, the first part about IniDOC crashing computers badly is much more interesting.

  • RLB (unregistered)

    All very well, Remy, but as for your HTML comment: that's nothing to do with there being or not being documents. Your framers are cowboys, plain and simple.

  • (nodebb)

    Dynamic column creation? Images and other documents stashed in BLOBs? MS SQL Server in the mid-2000s?

    What could go wrong? This really is a WTF.

    I've made some of these mistakes myself in prototype code. But there were plenty of books and other resources back then to help devs avoid this kind of stuff.

    Repeat after me. An RDBMS is not a file system.

  • Anon (unregistered)

    The real WTF is trying to copy Visual Source Safe, and when that fails trying to copy Team Foundation Server.

    Did this product really exist, or was it made up to protect the guilty?

  • MiserableOldGit (unregistered)
    Nothing is worse than having a building's electrician working from one drawing as they plan their wiring, and having the framers working from another drawing, and putting their walls in different places than the engineer expects.

    Coming from that industry, even in the days before CAD, that is not even mild, it's SOP.

    Finding out when roof trusses are delivered that load bearing walls have been moved and they can't span is much more fun, especially when some bright spark then asks if they are still aligned with the piled foundations ....

    I'm not sure the SQL inner-platform thing is a terrible solution (although it's obviously badly implemented). From the description it sounds like it dates from a time when CMS/DMS were still fairly closed, and this industry requires such a huge number of interdependent attributes that you end up with that or very sparse tables. However, I don't get why anyone would be exposed to VSS and be inspired to copy/incorporate it!

    What you can't do is have all the table-building and version checking happening as users navigate the folder tree. Better to have the system chunter away in the background rebuilding things on update (while maintaining read-only locks on the dependencies) and let people know by notification when an update is ready.

    Sounds like the developer built something that worked fine and dandy when he demoed it with 5 or 6 blueprints of his garden shed and the idiots never even considered scalability.

  • Kattman (unregistered) in reply to OllieJones

    I agree... I'm a big proponent of RDBMSes, but only when it's the right tool for the job. In this case they should not have gone with SQL Server but rather one of those incorrectly named "NoSQL" databases. I say incorrectly named because they do have some structured query language to get at the data, but they are non-relational systems; it's a document database, and this is exactly what they were designed for. Use the right tool for the job, and an RDBMS is not it in this case.

  • WhatEver (unregistered) in reply to Anon

    I used to support a product that didn't just copy VSS, instead it actually used the genuine VSS product as its underlying file system. The application itself was a layer over the top of VSS that also acted as a multi-user and network adaptation layer and the UI was web based with a bunch of VBScript running behind the scenes (and although this sounds a lot like today's WTF, I can assure you it's not my story).

    BTW, in before the maligning of VSS. For single user/single machine/never across a network it worked pretty well and I never had a problem with it. But once you got out of that arena you were taking your life into your own hands.

  • TruePony (unregistered)

    TRWTF is not using a single entity-attribute-value table to store everything. Then they would have only 1/3 as many tables, so they would get 3 times the performance.

  • (nodebb)

    Damn you Remy! Now I've got Morgan Freeman in my head singing "Easy Reader, that's my name, I say -- uh, uh uh"

  • Bruce W (unregistered)

    Certainly, it wasn't the "gold master" or "release candidate" or what the customer would actually get. Standard Operating Procedure for technical sales.

  • (nodebb)

    "Because metadata could be anything- numeric, text, even a thumbnail image- the only workable datatype was to store everything in BLOB columns."

    Apparently metadata isn't what I thought it was.

  • DT (unregistered) in reply to jinpa

    Metadata is data about the data. So I'm not sure what you thought it was, but yes, it can be essentially anything.

  • Engineering Change Notice (unregistered)


    Do: version control strongly typed data objects.
    Do not: version control document files.

    Do: create version-controlled functions which generate document files.
    Do not: create documents ad hoc from version-controlled data objects.

    Do: create version-controlled assemblies from version-controlled data objects.
    Do not: use data object parameters in assemblies.

    Do: create version-controlled state machines incorporating production factors (how we are actually going to make this: shop routing, supply lead time, process tolerances, material cost analysis, shipping availability, certified material, packaging) along with application use.
    Do not: design solely by copying from a reference.

    You are working in an environment where on-site safety requirements prevent us from labeling over a label. Customer and labor agreements mandate every step to be documented and followed. A brief "ok" in an email can be used to authorize the purchase of materials costing hundreds of thousands of dollars made by staff who can be fired by simply dropping a fastener.

    Standards and the sharing of technology is encouraged. Plenty of inventory stock exists. Capacity, as in trust in knowing you can handle what we give you, is in short supply. Little selling is required.

    When in doubt please send inquiry to shipping dock.

  • MiserableOldGit (unregistered) in reply to Kattman

    I think the story predates the modern proliferation of those, particularly the document-oriented ones. Although yes, something like UniData might have done a better job. Trouble is the main "data files" they are dealing with are probably binary DXFs, and they are a pain.

    They most certainly should not have been trying to store tonnes of binary data in their data store, stick that out in a file store and reference it. As long as you keep users out of the file store it's not that hard to sort out, and much more resilient. I do suspect it might have been beyond that dev team, though.

  • sizer99 (google)

    I feel this story is true whether it's talking about AutoCAD, Pro/ENGINEER, Solidworks, Intergraph, Unigraphics/NX, whatever. They're all tottering towers of bug-infested kludginess.

  • MiserableOldGit (unregistered) in reply to sizer99

    "They all worked beautifully, back in the good old days of DOS/SparcStations/VAX"

    I don't remember this particular festering mess occurring in the ones I dealt with most (AutoCAD, SolidWorks, PDS), but I moved away from being directly involved with CAD 20 years ago. If I had to make a wild stab-in-the-dark guess I'd go with the Bentley Systems stuff.

  • Some Ed (unregistered)

    This sounded horribly like somebody had made a database table with no keys. And then they followed that with the worst kludge to provide keys to that database table.

    I've had the misfortune to work with such a system. It was probably the one where I had the epiphany that the "class" this was "best" in was "software somebody wanted to offload lock stock and barrel as fast as possible."

    Since performance was a definite concern of ours, and their proprietary code was all written in scripting languages I could just go in and change, I edited their "make a temporary table" script to make a permanent table, ran it, and then modified the rest of the code to use that table. That not only made the performance about a thousand times faster (so merely slow, rather than a 45-minute break after every tweak), it also fixed half a dozen other intermittent issues we were seeing.

    I shared this solution with the vendor.

    In the next update, their temporary table script indexed some of the fields to try to improve performance on the one search that would be made with the table before tossing it.

    I knew then the project wasn't going to be successful.

  • Some Ed (unregistered) in reply to Some Ed

    Just to clarify: on the crapware I talked about in 513134, the temporary table was being created without any filtering whatsoever, the exact same way for every operation that needed it. If there had been anything actually dynamic about it, my solution wouldn't have worked without more effort.

    The vendor excused their not using my suggestion by stating that they needed to remain compatible with their existing customers. Of the 10 existing customers they had told us about, when we had asked for examples of companies using this stuff, 9 had just bought a site license without proper software review, like we had, and then ditched it when they found it couldn't be made to do what they'd been told it could do, like we did. The remaining company managed to get the product to sort of work for a very niche part of the problem they'd been trying to solve with the program. The key feature of this niche that made that work was it wasn't important enough to be a problem if stuff broke for a week or two at a time.
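    The fix Some Ed describes, materializing a repeatedly rebuilt unfiltered temp table as a permanent indexed one, can be sketched in miniature with Python's sqlite3 (all table and column names here are hypothetical, not from the actual product):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE documents (id INTEGER PRIMARY KEY, title TEXT, author TEXT);
    INSERT INTO documents (title, author) VALUES
        ('Floor plan', 'alice'), ('Wiring diagram', 'bob'), ('Roof truss', 'alice');
""")

# Before: every search rebuilds the same unfiltered copy from scratch.
def search_slow(author):
    conn.execute("DROP TABLE IF EXISTS temp_docs")
    conn.execute("CREATE TABLE temp_docs AS SELECT * FROM documents")  # full copy, every call
    return conn.execute(
        "SELECT title FROM temp_docs WHERE author = ?", (author,)).fetchall()

# After: build the lookup table once, index it, and reuse it.
conn.execute("CREATE TABLE doc_lookup AS SELECT * FROM documents")
conn.execute("CREATE INDEX idx_lookup_author ON doc_lookup (author)")

def search_fast(author):
    return conn.execute(
        "SELECT title FROM doc_lookup WHERE author = ?", (author,)).fetchall()

print(search_fast("alice"))  # same rows as search_slow("alice"), without the rebuild
```

    As the follow-up notes, this only works because the temp table was built identically every time with no filtering; a permanent table also needs refreshing whenever the source data changes.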

  • Mr Bits (unregistered) in reply to OllieJones

    (cough) SharePoint (cough)

  • Mike Swaim (google)

    I've done something like this. Back in the 90's, I worked on a document management system using SQL Server as the back end. (The actual documents were stored on a FileNet jukebox.) FileNet supported a limited number of attributes per document class, and if you screwed anything up, that was a service call.

    They're using a variant of an EAV schema, which is reasonable in this situation. Attributes change wildly per document class, and you're not going to see many reused between an MSDS sheet and a unit diagram. Even between CAD drawings, you could easily have different attributes, depending on what it was a drawing of.

    The first WTF that jumps out at me is that they're building a temp table with every attribute as a separate column. Your system might have several hundred attributes across all documents, but any given class would have less than 10. The stored procedure creates a couple of hundred columns that'll always be blank and returns them to the client, which then has to figure out which ones it doesn't care about and discard them. That's a lot of extra work that you do over and over again. If you're going to do that, you might as well just stick everything in a huge master table and abandon EAV. It'll be a lot less work in the long run.
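    The point about EAV here, that you can fetch just the handful of attributes a document actually has instead of pivoting every attribute in the system into columns, can be sketched with sqlite (schema and names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE doc (id INTEGER PRIMARY KEY, class TEXT);
    CREATE TABLE attr (doc_id INTEGER, name TEXT, value TEXT,
                       FOREIGN KEY (doc_id) REFERENCES doc(id));
    INSERT INTO doc VALUES (1, 'cad_drawing'), (2, 'msds_sheet');
    INSERT INTO attr VALUES
        (1, 'scale', '1:50'), (1, 'sheet_size', 'A1'),
        (2, 'hazard_class', '3'), (2, 'flash_point', '23C');
""")

# Fetch only the attributes a given document actually has --
# no giant temp table with a column for every attribute in the system.
def attributes(doc_id):
    rows = conn.execute(
        "SELECT name, value FROM attr WHERE doc_id = ?", (doc_id,))
    return dict(rows)

print(attributes(1))  # {'scale': '1:50', 'sheet_size': 'A1'}
```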

  • (nodebb)

    You can do it the Oracle way: every important object table has a set of ATTRIBUTEnnn columns, a smaller set of ATTRIBUTE_NUMBERnn columns, possibly some ATTRIBUTE_DATEnn columns (depending on how many people have complained about not having date attributes), and an ATTRIBUTE_CONTEXT column that is used to specify how all the attribute columns are interpreted.

    If you're really lucky you can also get a set of GLOBAL_ATTRIBUTE columns on the same table, like here

    In some tables I've seen the attribute columns go up to ATTRIBUTE150 and ATTRIBUTE_NUMBER50. I can't fathom the madness that requires this many potential attributes in a single context.
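    A toy illustration of the flexfield pattern described above, where generic ATTRIBUTE columns only acquire meaning through ATTRIBUTE_CONTEXT (the contexts and layouts here are invented, not real Oracle seed data):

```python
# Hypothetical mini-version of Oracle-style flexfield columns: generic
# ATTRIBUTE slots whose meaning depends on the row's ATTRIBUTE_CONTEXT.
CONTEXT_LAYOUT = {
    "PO_LINE": {"ATTRIBUTE1": "supplier_ref", "ATTRIBUTE_NUMBER1": "lead_time_days"},
    "INVOICE": {"ATTRIBUTE1": "tax_code",     "ATTRIBUTE_NUMBER1": "discount_pct"},
}

def decode(row):
    # Look up the layout for this row's context, then rename the
    # generic columns to their context-specific meanings.
    layout = CONTEXT_LAYOUT[row["ATTRIBUTE_CONTEXT"]]
    return {meaning: row[col] for col, meaning in layout.items()}

row = {"ATTRIBUTE_CONTEXT": "PO_LINE", "ATTRIBUTE1": "ACME-42", "ATTRIBUTE_NUMBER1": 14}
print(decode(row))  # {'supplier_ref': 'ACME-42', 'lead_time_days': 14}
```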

  • Yazeran (unregistered) in reply to MiserableOldGit
    They most certainly should not have been trying to store tonnes of binary data in their data store, stick that out in a file store and reference it. As long as you keep users out of the file store it's not that hard to sort out, and much more resilient. I do suspect it might have been beyond that dev team, though.

    Yep. Although I am in no way better at programming than most and have made my fair share of WTFs, even I knew to do this when I made our first document storage system some 10 years ago (nowadays used mainly for risk assessments).

    Make SQL tables for all the metadata (such as title, author, modification date, abstract, keywords, etc.), but store the actual file in a file store, include a table column with the filename in that store, and make damn sure users can in no way modify that value. Then you have both the search performance and reasonable scaling.


    Plan: To go to Mars one day with a hammer.
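    The scheme described here, metadata rows in SQL plus a server-controlled filename pointing into a file store, might look roughly like this (a minimal sketch; all names are made up):

```python
import os
import sqlite3
import tempfile
import uuid

store = tempfile.mkdtemp()  # stand-in for the locked-down file store
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE doc_meta (
    id INTEGER PRIMARY KEY, title TEXT, author TEXT, stored_name TEXT)""")

def add_document(title, author, data):
    # Server-generated name: users never choose or modify the path.
    stored_name = uuid.uuid4().hex
    with open(os.path.join(store, stored_name), "wb") as f:
        f.write(data)
    conn.execute(
        "INSERT INTO doc_meta (title, author, stored_name) VALUES (?, ?, ?)",
        (title, author, stored_name))

def fetch_document(title):
    # Metadata search stays in SQL; only the final read touches the store.
    (stored_name,) = conn.execute(
        "SELECT stored_name FROM doc_meta WHERE title = ?", (title,)).fetchone()
    with open(os.path.join(store, stored_name), "rb") as f:
        return f.read()

add_document("Risk assessment", "yazeran", b"%PDF-1.4 ...")
print(fetch_document("Risk assessment"))  # b'%PDF-1.4 ...'
```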

  • X (unregistered)

    Right now my personal WTF is not having access to the git repository for more than a week :/ Code Reviews will be fun, once that is fixed.

  • RLB (unregistered) in reply to OllieJones

    Repeat after me. An RDBMS is not a file system.

    You say that, but I've worked with an Informix database that really did emulate a (limited, and specialised) file system.

  • (nodebb) in reply to sizer99

    I feel this story is true whether it's talking about AutoCAD, Pro/ENGINEER, Solidworks, Intergraph, Unigraphics/NX, whatever. They're all tottering towers of bug-infested kludginess.

    I worked at a company that sold "project collaboration clouds" to AutoCAD customers, because there is an entire ecosystem of atrocious software packages to sync changes between companies that are collaborating on projects. The best fix was to uninstall all of that crap and have one copy of the document and a few hundred remote desktops. Yes, having CAD people remote into four-monitor workstations was a better solution than using these garbage heaps.

  • Shut the fuck up (unregistered) in reply to P

    Shut the fuck up

  • 🤷 (unregistered)

    Developers not understanding SQL, or how large amounts of data can be a bottleneck, are all too common. I once worked with an application that would first store millions of rows in a temp table (not an actual temp table, but one dropped and created at runtime by the app) and then deleted all the rows that didn't fit the search criteria. Why? I have no idea. A simple "WHERE" made the app run much faster.
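    The anti-pattern described above, and the one-line fix, in a minimal sqlite sketch (hypothetical table and data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT)")
conn.executemany("INSERT INTO orders (region) VALUES (?)",
                 [("north",), ("south",), ("north",), ("east",)])

# The anti-pattern: copy every row, then delete the ones you didn't want.
conn.execute("CREATE TABLE results AS SELECT * FROM orders")
conn.execute("DELETE FROM results WHERE region <> 'north'")
slow = conn.execute("SELECT id FROM results").fetchall()

# The fix: let a WHERE clause do the filtering up front.
fast = conn.execute("SELECT id FROM orders WHERE region = 'north'").fetchall()

print(slow == fast)  # True: same answer, without materializing every row first
```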

  • Mike S (unregistered) in reply to RLB

    With a little work, SQL Server can use the file system to store BLOBs. You can even get a file handle to the underlying object, if you like.

  • (nodebb) in reply to Mike S

    Yes it can. Sometimes it makes things a lot better, sometimes it makes things a lot worse.

    We use two document repositories at my current workplace. I regularly swear at SharePoint for storing documents in the database, and I regularly swear at OnBase for not storing them in the database. Of course both for different reasons. There is grass on both sides -- none of it is green.

Leave a comment on “The Document Cursor”
