• xdfsdfg (unregistered) in reply to boog
    boog:
    airdrik:
    TRWTF is "Unit" tests which require specific data in the database. Are they testing inserting and deleting from the database? If not they should be spoofing/mocking the data access to provide the data that they need for the unit test. If they really need specific data in the data base then it is no longer a unit test because it isn't testing a single unit.
    Excellent point; just because a test is automated does not make it a unit test. It's surprising how many people seem to confuse the two terms when it comes to testing.

    Due to this confusion I've actually worked with developers who argued against the use of unit tests in complex systems which they believed were too complex for effective unit testing. I just couldn't convince these people that unit tests are only supposed to test a single module (or unit) at a time.

    That's why I test everything as strings:

    var foo = new MyBusinessObject();

    Assert.AreEqual("this is a test", foo.ToString());

    Can't get smaller than a string.

  • chilltemp (unregistered) in reply to Fred

    With good partitioning and truncates, you can dereference the existing data in large chunks. Of course, that takes proper planning and a database that supports these concepts.

    But you are right, truncating involves some writes and deleting data on a per row basis involves many writes. As our resident database guru frequently says (partially quoting Tom Kyte): Row by row equals slow by slow.

  • Franz Kafka (unregistered) in reply to Fred
    Fred:
    Sutherlands:
    One of the tables my team uses sees enough writes to keep 4 app machines maxxed out on disk I/O (reading MSMQ) nearly all day. I suppose you'll tell me we should not delete that data, either?
    Have you come up with a way to delete it that does not cause additional writes?

    The usual way is to partition on date and then drop old partitions. Of course, if you're using the table as a message queue (assumed from the MSMQ ref), then you're being disingenuous - MQs are not persistent data.
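
    A rough sketch of that partition-and-drop approach in Oracle syntax (the table, column, and partition names here are invented for illustration); dropping a partition sidesteps the per-row writes Fred asked about, because no individual rows are ever deleted:

    CREATE TABLE message_log (
        msg_id     NUMBER,
        created_at DATE,
        payload    VARCHAR2(4000)
    )
    PARTITION BY RANGE (created_at) (
        PARTITION p_2010_q4 VALUES LESS THAN (DATE '2011-01-01'),
        PARTITION p_2011_q1 VALUES LESS THAN (DATE '2011-04-01')
    );

    -- ageing out a whole quarter is a metadata operation, not a row-by-row delete
    ALTER TABLE message_log DROP PARTITION p_2010_q4;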

  • ih8u (unregistered) in reply to Actually read the comments
    Actually read the comments:
    Did anyone commenting about "you should never REALLY delete rows, blah, blah blah" actually read the comments for the trigger posted with TFA? Did you not see the mention of the "PRODUCTS_AUDIT" table that almost certainly stored all the records that were deleted from the main PRODUCTS table?

    TRWTF is that you expected people to RTFA. Ooh, it's about databases -- better get my genius thoughts posted.

    captcha: jumentum .... I'm not touching that one.

  • (cs)

    I see it. The real wtf is that he misspelled "brillant" right?

  • Ben (unregistered) in reply to Someone who can't be bothered to login from work
    Someone who can't be bothered to login from work:
    boog:
    Why is the VP so involved in the low-level implementation details here? So he/she insists that the trigger prevents deletes as it used to. That seems easy enough, and you don't even have to reintroduce a bug to do it. So why is the VP making mandates on how the trigger enforces this rule?

    Are we sure this wasn't a case of some crybaby original developer/admin who didn't like his poorly-written trigger being corrected, so he went and tattled to the VP of security?

    My guess would be a small company with people who had delusions of grandeur and rejoiced under titles like "VP" when they were, in fact, probably only one or two steps removed from the low-level work.

    You have to be liberal with the title VP in a small company because only officers can sign certain legal documents.

  • Anonymously Yours (unregistered) in reply to Zach Bora
    Zach Bora:
    Where's my dailywtf? I feel like I've been stolen.
    The WTF is a two-parter. First, that an absurd trigger was put in to stop delete commands, to protect the data from a "rogue process" performing deletes instead of appropriately limiting that power using table permissions. Second, to resolve this problem, the author wrote a rogue process to silently bypass the trigger, which could inevitably lead to validating the VP's concerns that some process could delete records in a table that it should be impossible to delete records from.
  • (cs) in reply to Anonymously Yours
    Anonymously Yours:
    The WTF is a two-parter. First, that an absurd trigger was put in to stop delete commands, to protect the data from a "rogue process" performing deletes instead of appropriately limiting that power using table permissions.
    Not quite. The trigger sounds like an auditing trigger, and nothing more. Preventing deletes appears to actually be a side effect due to a bug, where they were using :new.product_id instead of :old.product_id. On a delete, :new would be NULL (since it's deleting the record), so when the trigger creates the record in the audit table, it fails because the primary key can't contain NULL.

    After someone fixed it, the VP insisted that they un-fix it in the name of data security. It's at that point they should have introduced table permissions. The problem isn't that they used a trigger to enforce table permissions; it's that they knowingly reintroduced a bug to enforce table permissions.

    Anonymously Yours:
    Second, to resolve this problem, the author wrote a rogue process to silently bypass the trigger, which could inevitably lead to validating the VP's concerns that some process could delete records in a table that it should be impossible to delete records from.
    Keep in mind though that the unit tests (or "rogue process" as you say) are probably running in a test environment.
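
    For anyone who hasn't seen one, a minimal sketch of the kind of auditing trigger boog describes; only PRODUCTS, PRODUCTS_AUDIT, and product_id come from the article, and the trigger name and other columns are assumed:

    CREATE OR REPLACE TRIGGER products_audit_trg
    AFTER INSERT OR UPDATE OR DELETE ON products
    FOR EACH ROW
    BEGIN
        INSERT INTO products_audit (product_id, change_type, changed_on)
        VALUES (
            -- :new.product_id is NULL on a DELETE, which is what broke deletes;
            -- using :old (falling back to :new on inserts) is the fix the VP reverted
            COALESCE(:old.product_id, :new.product_id),
            CASE WHEN DELETING THEN 'D' WHEN UPDATING THEN 'U' ELSE 'I' END,
            SYSDATE
        );
    END;
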
  • (cs) in reply to Franz Kafka
    Franz Kafka:
    Fred:
    Sutherlands:
    One of the tables my team uses sees enough writes to keep 4 app machines maxxed out on disk I/O (reading MSMQ) nearly all day. I suppose you'll tell me we should not delete that data, either?
    Have you come up with a way to delete it that does not cause additional writes?

    The usual way is to partition on date and then drop old partitions. Of course, if you're using the table as a message queue (assumed from the MSMQ ref), then you're being disingenuous - MQs are not persistent data.

    The queues are on a separate cluster, the processing of which is simply to insert it into the DB. (This particular application is full of fail, btw.) But yeah, the way they handle it is the partitioning. Also, the application cluster is what is maxed out on disk I/O, not the DB.

  • (cs)

    TRWTF is using the product id as the primary key of the audit table. You might not need a primary key for the audit table, if it is infrequently read, but if you do it has to have its own synthetic key.

  • (cs) in reply to Alistair Wall
    Alistair Wall:
    TRWTF is using the product id as the primary key of the audit table. You might not need a primary key for the audit table, if it is infrequently read, but if you do it has to have its own synthetic key.
    Who said the product ID was the only field in the primary key? I'd be surprised if it didn't also have a timestamp, change type (insert/update/delete), and username.

    Even if it's infrequently read, considering how big audit tables can get I can't imagine an index being unwelcome.

  • An Old Hacker (unregistered) in reply to Larry
    Larry:
    Sutherlands:
    One of the tables my team uses sees enough writes to keep 4 app machines maxxed out on disk I/O (reading MSMQ) nearly all day. I suppose you'll tell me we should not delete that data, either?
    Well, since you're recording the GPS coordinates of every cell phone in the country once every 10 seconds, yes I wish you would delete it.

    You would be amazed at how easy it is to accumulate real business data at this sort of rate. I worked for a startup that was in the TB/hr range. THAT was a fun table to archive twice a day...

  • (cs)

    I think this VP worked for my company at one point. Here is a trigger from an old version of one of my production databases:

    ALTER TRIGGER [dbo].[secRoleTr] on [dbo].[SecRole]
    FOR INSERT, UPDATE, DELETE
    AS
        Rollback transaction -- commit transaction -- No changes allowed

    I wonder how the roles were initially loaded into the system? I also wonder why permissions weren't good enough to do the job? After all, anyone who can alter or bypass permissions can also disable the trigger.

  • (cs) in reply to Someone who can't be bothered to login from work
    Someone who can't be bothered to login from work:
    I belong to a group which once changed the members' forum so you couldn't mention the name of an ex-member.
    I think I know the group. Was this member known by a 4-letter first name followed by the initial 'T'?
  • AA (unregistered) in reply to airdrik
    airdrik:
    TRWTF is "Unit" tests which require specific data in the database. Are they testing inserting and deleting from the database? If not they should be spoofing/mocking the data access to provide the data that they need for the unit test. If they really need specific data in the data base then it is no longer a unit test because it isn't testing a single unit.

    I think it's reasonable to assume that this is part of some unit tests on their database system, yes. Stick test data in the database, test that your stored procs work as they're supposed to, remove the test data when you're done.

  • ted (unregistered) in reply to michael
    michael:
    boog:
    Why is the VP so involved in the low-level implementation details here?
    I used to work for one of the last "real" investment banks, and there my colleague who was at the end of the IT food chain was also a VP. It's only a title and doesn't mean sh*t in some companies.

    This is a thing with banks. I've worked with a bank (traditional not investment) that did the same. Everyone in this one building was a VP. Heh.

  • (cs) in reply to Fred
    Fred:
    Gruntled Postal Worker:
    usually if you don't need records any more, you mark them 'disabled' rather than attempting to change history.
    This needs to be tattooed on the ass of every mindless zombie who knee-jerk deletes anything they don't want to see any more.

    Databases collect facts. If they were true when you collected them, they will be true forever. (On April 12 the price of rice was 1.23.)

    Yes, but see also Ben's point:

    Ben:
    One small company wanted me to design a schema that would (among other things) store invoices, so my first proposal had the invoice storing a copy of each product's basic info at the time of purchase. They could not comprehend that if the invoice simply stored a product ID, the price might change and invalidate all the invoice records. ...

    Some of the tables on one product I work on have a Disabled column, but for others we have to copy on use to ensure that an update doesn't have unwanted effects, and so the fact is recorded elsewhere. In those tables it makes more sense to delete rows than mark them as disabled.

  • Anonymously Yours (unregistered) in reply to boog

    boog:
    Anonymously Yours:
    The WTF is a two-parter. First, that an absurd trigger was put in to stop delete commands, to protect the data from a "rogue process" performing deletes instead of appropriately limiting that power using table permissions.
    Not quite. The trigger sounds like an auditing trigger, and nothing more. Preventing deletes appears to actually be a side effect due to a bug, where they were using :new.product_id instead of :old.product_id. On a delete, :new would be NULL (since it's deleting the record), so when the trigger creates the record in the audit table, it fails because the primary key can't contain NULL.

    After someone fixed it, the VP insisted that they un-fix it in the name of data security. It's at that point they should have introduced table permissions. The problem isn't that they used a trigger to enforce table permissions; it's that they knowingly reintroduced a bug to enforce table permissions.

    An intentionally broken piece of code was put into a trigger to stop delete commands. The fact that we don't know if that was the initial purpose is irrelevant.

    boog:
    Anonymously Yours:
    Second, to resolve this problem, the author wrote a rogue process to silently bypass the trigger, which could inevitably lead to validating the VP's concerns that some process could delete records in a table that it should be impossible to delete records from.
    Keep in mind though that the unit tests (or "rogue process" as you say) are probably running in a test environment.
    If this trigger only existed in a test environment, the VP of security wouldn't have stepped in to complain about "rogue processes" (his words, not mine, from the article). What happens when a clone of production has data loss? You restore it with a copy of production. The main purpose of a test environment is that if something explodes, it does so with data you can afford to lose. Protecting the test environment but not production is very unlikely.

    Now, ultimately we do not know if the trigger override code made it into production or not. Assuming it did not, the author has still created a WTF to counter another WTF. Jody has made a unit test that does not accurately reflect what is happening in production, which could lead to code/procs/etc. being promoted that shouldn't be, given that the real behavior of said work cannot be observed until it reaches production.

  • PinkyAndTheBrainFan187 (unregistered) in reply to Ben
    Ben:
    One small company wanted me to design a schema that would (among other things) store invoices, so my first proposal had the invoice storing a copy of each product's basic info at the time of purchase. They could not comprehend that if the invoice simply stored a product ID, the price might change and invalidate all the invoice records. I offered a more normalized solution with a history of products, but they claimed I was introducing redundant tables. I also offered a no update policy: create a new product ID any time anything changed, but they weren't happy with that.

    I think I've since figured out why they resisted it. Storing historical data is inherently complicated, and that complexity is clearly portrayed by the schema. The schema, of course, represents the actual functioning of the business. When managers see a complex schema and can't understand it, it causes cognitive dissonance: they must know how the business operates, so how could it be that they don't understand a formal representation of that?

    I thank my lucky stars that I've never encountered this level of stupidity. Why take on an expert and ignore what they have to say? If they really knew better, why weren't they doing it themselves? Of course, the reason they paid you is so that they can ignore you and then blame you later when someone brings a product that has increased in price back for a refund...

  • Old Smith (unregistered)

    TRWTF is that the name of the table is plural

  • Winkly (unregistered) in reply to ER
    ER:
    shouldn't the unit test fail?

    No. If the test passes, the code is fine. End of story. At least this is the view propagated amongst managers where I am.

  • z (unregistered)

    What about ... not deleting the rows after the test?

    After all, your test DB gets reset to consistent state after each run ... right? RIGHT?

  • (cs)
    To get my unit tests working I just added an ALTER TRIGGER PRODUCTS_TRIGGER DISABLE to the test script, with a corresponding ALTER TRIGGER PRODUCTS_TRIGGER ENABLE after the delete was successful.

    How about using BEGIN TRANSACTION and ROLLBACK instead of deleting?
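
    In Oracle terms that could look roughly like the sketch below (the column list is assumed); because the setup is rolled back rather than deleted, the trigger's delete-blocking bug never comes into play:

    -- set up test data inside an open transaction
    INSERT INTO products (product_id, product_name)
    VALUES (999999, 'unit-test product');

    -- ... run the assertions against the test row here ...

    -- undo the setup without ever issuing a DELETE
    ROLLBACK;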

  • Appelflap (unregistered) in reply to Bill C.
    Bill C.:
    "Or, you know, archive the obsolete data in a separate table."

    Then you have to write a completely separate set of queries for anything about them, foreign keys to it are merely convention, and you have to write a bunch of code to move it over.

    The data probably isn't hurting anything where it is, and if you're running out of space, moving to another table will only help if it's in another tablespace.

    Yes, just dynamically replacing the table name in your code is of course a big no-no. Using archive tables is quite a well-known way of keeping the performance of your database on par while it grows. Foreign keys also influence the optimizer a great deal, so they are not merely convention, not even on archive tables, and needing a bunch of code to do a SELECT...INTO is just crappy programming.

    Captcha: IMAKNOBQUOTINGCAPTCHAS.... hum.
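
    For what it's worth, the archive-table move being debated here is usually a two-statement job; the products_archive table and the last_sold column are assumed for the sketch (and the DELETE is, of course, exactly what the article's trigger forbids):

    INSERT INTO products_archive
        SELECT * FROM products
        WHERE last_sold < ADD_MONTHS(SYSDATE, -12);

    DELETE FROM products
    WHERE last_sold < ADD_MONTHS(SYSDATE, -12);

    COMMIT;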

  • Level 2 (unregistered) in reply to mea37
    mea37:
    The use of a trigger to prevent deletes at all may be a WTF; depends on the permission system for the database, but generally if you don't want to grant delete access to a table you don't have to.
    What do you mean, grant delete access. Doesn't everyone run his code as the schema owner? No need for stupid grants.
  • SmittyBoy (unregistered)

    ... that TRWTF is buried in this comment

    "This trigger was preventing deletes because it's trying to insert NULL into the primary key of the PRODUCTS_AUDIT table on DELETES...".

    It is clear from this that no significant testing of the entire process occurred, or the trigger would have always caused a primary key error on a delete.

    Regardless of your position on deleting rows, this indicates that testing was - as usual - inadequate.

  • JR (unregistered) in reply to xdfsdfg
    xdfsdfg:
    boog:
    airdrik:
    TRWTF is "Unit" tests which require specific data in the database. Are they testing inserting and deleting from the database? If not they should be spoofing/mocking the data access to provide the data that they need for the unit test. If they really need specific data in the data base then it is no longer a unit test because it isn't testing a single unit.
    Excellent point; just because a test is automated does not make it a unit test. It's surprising how many people seem to confuse the two terms when it comes to testing.

    Due to this confusion I've actually worked with developers who argued against the use of unit tests in complex systems which they believed were too complex for effective unit testing. I just couldn't convince these people that unit tests are only supposed to test a single module (or unit) at a time.

    That's why I test everything as strings:

    var foo = new MyBusinessObject();

    Assert.AreEqual("this is a test", foo.ToString());

    Can't get smaller than a string.

    I really hope you guys are all kidding... The classes (entity services) that retrieve data from the database are also units, and hence could & should be unit tested. We don't use a "mock database" for this, but a test database, set up using the same script as the deployment one.

    We once had a WTFcoder that did tests on the toString method as well; we're still cleaning up his mess.

    Captcha: erat - how appropriate!

  • Jacob (unregistered) in reply to Fred
    Fred:
    Gruntled Postal Worker:
    usually if you don't need records any more, you mark them 'disabled' rather than attempting to change history.
    This needs to be tattooed on the ass of every mindless zombie who knee-jerk deletes anything they don't want to see any more.

    Databases collect facts. If they were true when you collected them, they will be true forever. (On April 12 the price of rice was 1.23.)

    If they were false when you collected them, then you might want to fix the error, but deleting them is rarely the right solution.

    And for those who are habitually "purging" data to save space, try to recall that the price of disk is around ten cents a gigabyte and falling. In other words, in the time it takes you to think "I wanna delete some stuff" you've already eaten up more of your employer's money in salary than they will allegedly save by reusing the disk space.

    If you're deleting for performance... try an index. They're amazing! Or get a real computer with a real database.

    I'm thinking one could almost make the case that data should be written to write-once-never-modify media.

    It sounds like you might have a hoarding problem. You need to learn to let go.

  • Jimmy Jones (unregistered) in reply to Ben
    Ben:
    No one could explain to the VP that if something is sensitive they could move it to a readonly archive rather than delete it? And that you can have permissions in a DBMS to prevent deletions rather than a buggy trigger?

    Ummm ... yes, that's the WTF.

  • (cs) in reply to Fred
    Fred:
    Gruntled Postal Worker:
    usually if you don't need records any more, you mark them 'disabled' rather than attempting to change history.
    This needs to be tattooed on the ass of every mindless zombie who knee-jerk deletes anything they don't want to see any more.

    Databases collect facts. If they were true when you collected them, they will be true forever. (On April 12 the price of rice was 1.23.)

    If they were false when you collected them, then you might want to fix the error, but deleting them is rarely the right solution.

    And for those who are habitually "purging" data to save space, try to recall that the price of disk is around ten cents a gigabyte and falling. In other words, in the time it takes you to think "I wanna delete some stuff" you've already eaten up more of your employer's money in salary than they will allegedly save by reusing the disk space.

    If you're deleting for performance... try an index. They're amazing! Or get a real computer with a real database.

    I'm thinking one could almost make the case that data should be written to write-once-never-modify media.

    Disk space needs to be provisioned. Disk space alerts need to be investigated. Try doing both of those for the same cost as the time it takes to think "I wanna delete some stuff".

    You are ultimately right in that hard deleting things is usually a bad idea - however the idea that the best solution is to keep everything is worrying too.

    Archive old data and remove it from your 'live' sources.

  • Someone Awful (unregistered) in reply to F
    F:
    CyberShadow:
    TRWTF is using a live database for unit testing. What if a bug in the test caused it to actually delete all rows from the table?
    Just restore from backup. After all, there's bound to be one ... somewhere ...
    Or...y'know...use transactions?
  • TimG (unregistered) in reply to PinkyAndTheBrainFan187
    PinkyAndTheBrainFan187:
    Why take on an expert and ignore what they have to say? If they really knew better, why weren't they doing it themselves? Of course, the reason they paid you is so that they can ignore you and then blame you later when <snip>
    I ask that question with some frequency. In space there is no one to hear you scream.
  • joe (unregistered)

    TRWTF is that she ran her tests against a live database...

  • Sarge2009 (unregistered) in reply to Someone who can't be bothered to login from work

    It has been my experience that this is the sort of justification used in larger companies, not in smaller ones.

  • DC (unregistered) in reply to Bill C.
    Bill C.:
    "Or, you know, archive the obsolete data in a separate table."

    Then you have to write a completely separate set of queries for anything about them, foreign keys to it are merely convention, and you have to write a bunch of code to move it over.

    The data probably isn't hurting anything where it is, and if you're running out of space, moving to another table will only help if it's in another tablespace.

    Some databases have a better way to handle this sort of thing: you can have the table automatically split across two drives or servers.

    Typically, for transaction data, last year's stuff gets pushed off to the other drive.

    So if you query current data, it runs quickly off the main drive, but if you also need old data, then it uses both. From a user's point of view, it acts like a single table.

  • (cs) in reply to Anonymously Yours
    Anonymously Yours:
    An intentionally broken piece of code was put into a trigger to stop delete commands. The fact that we don't know if that was the initial purpose is irrelevant.
    How is backstory irrelevant? The initial purpose of the trigger appears to be for auditing. The bug is a pretty common bug (swapping :new and :old), and was likely unintentional. And the fact that a bug was fixed, then purposely reintroduced makes the WTF even funnier/stupider.
    Anonymously Yours:
    If this trigger only existed in a test environment the VP of security wouldn't have stepped in to complain about "rogue processes" (his words, not mine, from the article).
    I didn't say the trigger only existed in a test environment, I said that the unit tests were probably running in a test environment.
    Anonymously Yours:
    Now, ultimately we do not know if the trigger override code made it into production or not. Assuming it did not then the author has still created a WTF to counter another WTF. Jody has made a test unit that does not accurately reflect what is happening in production, which could lead to code/procs/etc. being promoted that shouldn't be, given that the real behavior of said work cannot be observed until it reaches production.
    I partially agree with you on this. First of all, I do think it's ludicrous that you're suggesting that Jody's unit testing code might make it into production, but if you are right then it's a huge WTF regardless of overriding the trigger.

    Second, without knowing more about Jody's test script, it's hard to say whether altering the configuration while unit testing is a WTF. For all we know, all of the actual tests are complete before Jody disables the trigger, deletes the test data, and re-enables the trigger. In that case, the tests themselves may still accurately reflect production.

  • sino (unregistered) in reply to Someone who can't be bothered to login from work
    Someone who can't be bothered to login from work:
    Chubber:
    I thought that TRWTF was redacting the names in the comments. "<name redacted>" used to work for a client of mine in the past. Due to some unspecified, but obviously very serious, transgressions, we actually had to go over all of the source code in our projects and remove any mention of his name, initials, etc. from the comments. Then the spec documents, etc. It was very Orwellian.

    CAPTCHA: secundum - That lacky who always seconds a motion.

    I belong to a group which once changed the members' forum so you couldn't mention the name of an ex-member.

    Was it Spectateswamp? Ol' swampy? TopCod3r? T0pC0der? boog, boog(unregistered), boog(unregistered) and boog(unregistered)?

    Aaaah, I miss those guys.

  • quisling (unregistered) in reply to CodeRage
    CodeRage:
    Running "Unit Tests" on a production database?
    CyberShadow:
    TRWTF is using a live database for unit testing. What if a bug in the test caused it to actually delete all rows from the table?
    joe:
    TRWTF is that she ran her tests against a live database...
    Well, yeah, either that, or QA takes prod .baks and restores them to dev/test labs every [unit of time]. Not overly safe from the PII standpoint, but sure helps with the environmental differences. It's not like you're hiring ex-cons or anything, right? YMMV.

    Disclaimer for TRWTF howlers: I don't do this. Just seens it thataway.

  • (cs) in reply to sino
    sino:
    Someone who can't be bothered to login from work:
    I belong to a group which once changed the members' forum so you couldn't mention the name of an ex-member.
    Was it Spectateswamp? Ol' swampy? TopCod3r? T0pC0der? boog, boog(unregistered), boog(unregistered) and boog(unregistered)?

    Aaaah, I miss those guys.

    We all miss boog.

    But as far as ex-members go, you know who I haven't seen in a long freakin' time? Gene Wirchenko. Always signing his comments. Whatever happened to that guy?

  • Tharg (unregistered)

    As has sadly only been mentioned once or twice, the correct solution is to control deletion by permissions. That way the VP's supposed "rogue process" is handled too.

    W.R.T. the cost of disks etc. some folks seem to have forgotten that systems exist to deliver business benefit for a company, and not to keep developers employed.

    If by ageing vast amounts of data out to cheap commodity drives, you can save money on expensive disks, then you damned well ought to. The fact that disk is cheaper than ever before does NOT mean that a terabyte-gobbling application should not have cheap storage for such things as audit.

    And finally, databases exist to store information. If you can find a way to do that with less data than before, well, that's called efficient design. Storing Terabytes of dross data does not mean that information useful to the business is being retained.
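
    A minimal sketch of the permissions approach, assuming the application connects as a hypothetical APP_USER account rather than as the schema owner:

    -- grant the application only the verbs it actually needs
    GRANT SELECT, INSERT, UPDATE ON products TO app_user;

    -- never grant DELETE (and revoke it anywhere it was granted in the past);
    -- no trigger gymnastics required
    REVOKE DELETE ON products FROM app_user;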

  • Anonymously Yours (unregistered) in reply to boog
    boog:
    Anonymously Yours:
    An intentionally broken piece of code was put into a trigger to stop delete commands. The fact that we don't know if that was the initial purpose is irrelevant.
    How is backstory irrelevant? The initial purpose of the trigger appears to be for auditing. The bug is a pretty common bug (swapping :new and :old), and was likely unintentional. And the fact that a bug was fixed, then purposely reintroduced makes the WTF even funnier/stupider.
    I meant it was irrelevant to the accuracy of my summary. However, I agree that it's even worse that it was a known bug that was re-introduced; I've had to re-break code I fixed because the flow was dependent upon the malfunction, and it annoyed me that I wasn't given time to correct the flow instead. Of course, it's always possible this trigger was initially created with a bug in it because the author did not know the appropriate way to disable deletions.
    boog:
    Anonymously Yours:
    If this trigger only existed in a test environment the VP of security wouldn't have stepped in to complain about "rogue processes" (his words, not mine, from the article).
    I didn't say the trigger only existed in a test environment, I said that the unit tests were probably running in a test environment.
    The reason I believed this trigger made it into production is I couldn't fathom the VP of Security monitoring the triggers protecting a QA database. It seemed more likely to me it existed in production and was copied into the test environment on a resync than that someone would give a damn about the sanctity of data that is likely regularly overwritten. Of course, that's not impossible either...
    boog:
    Anonymously Yours:
    Now, ultimately we do not know if the trigger override code made it into production or not. Assuming it did not then the author has still created a WTF to counter another WTF. Jody has made a test unit that does not accurately reflect what is happening in production, which could lead to code/procs/etc. being promoted that shouldn't be, given that the real behavior of said work cannot be observed until it reaches production.
    I partially agree with you on this. First of all, I do think it's ludicrous that you're suggesting that Jody's unit testing code might make it into production, but if you are right then it's a huge WTF regardless of overriding the trigger.
    Well we don't really know where Jody's override wound up. This is sort of a multi-tiered WTF wrapped in an enigma and shrouded in terrible WTF possibilities.
    boog:
    Second, without knowing more about Jody's test script, it's hard to say whether altering the configuration while unit testing is a WTF. For all we know, all of the actual tests are complete before Jody disables the trigger, deletes the test data, and re-enables the trigger. In that case, the tests themselves may still accurately reflect production.
    Agreed, though it all comes back to the joys of not knowing the context. There are so many ways this story could be the tip of the iceberg.
  • (cs) in reply to Anonymously Yours
    Anonymously Yours:
    I've had to re-break code I fixed because the flow was dependent upon the malfunction and it annoyed me I wasn't given time to correct the flow instead.
    I've decided that whenever bugs are intentionally re-introduced, a WTF is involved somewhere. It may be in the code, configuration, third-party apps, or management, but there's a WTF somewhere.

    And I can certainly sympathize with the annoyance of having to revert fixes because something (or someone) was dependent on the bug.

    Anonymously Yours:
    Of course, it's always possible this trigger was initially created with a bug in it because the author did not know the appropriate way to disable deletions.
    It's possible, but I think swapping :new and :old is a pretty simple mistake. It's hard to say for sure why it happened. At the very least, I think we can agree that the resulting disabled-deletion state did nothing to facilitate the observation that table permission definitions were inadequate. :)
    Anonymously Yours:
    Agreed, though it all comes back to the joys of not knowing the context. There are so many ways this story could be the tip of the iceberg.
    Very true; considering they had a VP who insisted that they re-introduce a bug, I'm sure the organization is a wonderfully-WTFy WTF-land chock full of WTF-goodness. If I was Jody, I'd have grepped the codebase/commit-logs for that VP's name, and then I'd have a whole slew of WTFs to submit.
  • (cs) in reply to evil mutant zombie plasmoid
    evil mutant zombie plasmoid:
    ... We zombies have to stick together. ...
    Zombies have to stick together so they don't fall apart.
  • Grumpy (unregistered) in reply to mea37
    mea37:
    Preventing deletes in a test environment (or, alternatively, testing in a production environment) is a WTF. Test code must be free to do whatever it needs to set up test cases.
    I agree fully with the rest, but not with everything outside the parentheses. If the test environment does not react like the production environment on delete, you have no idea how your code will react to being told to hop off - crash, go and sulk in a corner, or (praise be!) tell the user something sensible? So the test environment should react like the production environment *and* have one or more cleanup procedures run as root without triggers for removing crud after testing, e.g. a reload of a Known To Be Good dataset. If you need to modify the dataset before testing, that's also run separately as root without triggers. Otherwise you have no idea what the result of the test means.
  • cappeca (unregistered)

    I have - yes, I shamefully admit - implemented a trigger in order to trap a bug in the application that we simply couldn't find after more than a year. Once in a while, product data would be written with the wrong decimals: 12,00 instead of 1,20, 49,00 instead of 4,90, and so on. It happened rarely enough that we were never prepared for it, yet often enough to upset our customers. We had no clue how it happened.

    So we made this killer trigger: it would capture everything that had happened and was happening on the database at the precise moment new.value was 10 times bigger than old.value. We even put some arcane Oracle functions in it. It was fully documented so it wouldn't be dropped by accident. It was awesome - this bug was DEAD!

    It's been up for six months, and was never fired. The problem has never happened again.
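
    Presumably something along these lines; the capture table, the PRICE column, and the choice of session attributes are reconstructed from the description rather than taken from the actual trigger:

    CREATE OR REPLACE TRIGGER price_sanity_trg
    BEFORE UPDATE OF price ON products
    FOR EACH ROW
    WHEN (NEW.price >= OLD.price * 10)
    BEGIN
        INSERT INTO price_bug_capture
            (captured_on, product_id, old_price, new_price, db_user, os_user, client_module)
        VALUES
            (SYSTIMESTAMP, :old.product_id, :old.price, :new.price,
             USER,
             SYS_CONTEXT('USERENV', 'OS_USER'),
             SYS_CONTEXT('USERENV', 'MODULE'));
    END;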

  • chris (unregistered) in reply to Gruntled Postal Worker
    Gruntled Postal Worker:
    Of course, depending on a bug is not a good idea, but...
    "As an interesting side note, there are about 1.1 million rows in the PRODUCTS table. 700,000 of these products have not been sold in over a year. In other words, more than 50% of the data in the PRODUCTS table is obsolete, but can't be purged! Sheer genius!"

    Seriously, why would you want to delete product records? <snip> I don't know the OP's business case, and deleting the records might be entirely valid in this case, but usually if you don't need records any more, you mark them 'disabled' rather than attempting to change history.

    I agree, I think soft-deletes (e.g. [status] = 'DEL') would be fine here. 700,000 obsolete records isn't a lot nowadays, and as other people have pointed out, if you might one day need to refer back to them, then an archive table or whatever is going to slow down creation and execution of queries.
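
    A quick sketch of the soft-delete idea, with an assumed STATUS column and a view so that day-to-day queries only ever see live rows:

    -- "delete" a product without rewriting history
    UPDATE products
    SET status = 'DEL'
    WHERE product_id = 12345;

    -- point application queries at the live rows only
    CREATE OR REPLACE VIEW active_products AS
        SELECT * FROM products WHERE status <> 'DEL';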

  • Jay (unregistered)

    There's some data that it makes sense to keep forever, like data that changes over time and for which the value at any given time in the past may be needed. For example, in a retail system, we probably want to know the price of a product at the time it was sold, so if a customer returns it and asks for a refund we know how much to give him.

    But there are also many cases where old data is not particularly useful. Like, if a customer changes his phone number, do we really need to know what his phone number was at the time of his purchase 5 years ago? If there's a problem with that sale, are we going to call the phone number he had at the time of the sale, or the phone number he has now?

    In a case like that, keeping old data doesn't just waste disk space, which is, indeed, getting less and less important as disk space gets cheaper. It also makes the schema more complicated: The key to the customer record (or wherever the phone number is stored) now has to include a timestamp. Queries are more complicated: At a minimum, we now have to get the record with the right customer id AND the latest timestamp instead of just the right customer id. In queries that involve joins there is now the possibility of multiple records from this table matching. In the simple case we could just "order by updatetimestamp desc limit 1", but what if there are joins to multiple tables that all have these timestamps? Now we have to add a host of order-bys and limits or distincts and the whole thing becomes a pain. And sooner or later someone will make a mistake and we'll have sales being counted twice or something.
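
    For illustration, the "latest row wins" lookup described above usually ends up as a window-function query along these lines (table and column names invented), and every join against such a history table needs the same wrapper:

    SELECT customer_id, phone_number
    FROM (
        SELECT customer_id,
               phone_number,
               ROW_NUMBER() OVER (PARTITION BY customer_id
                                  ORDER BY update_ts DESC) AS rn
        FROM customer_history
    )
    WHERE rn = 1;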

  • Jay (unregistered)

    The poster doesn't say what his business is, but it doesn't necessarily follow that a product is obsolete just because it hasn't been sold for a year.

    Suppose that among the products you sell are Christmas decorations. (Am I allowed to say that? Or am I supposed to say "The Winter Holiday That May Not Be Named"?) The fact that you haven't sold one of these for 11 months would probably mean nothing at all. If you haven't sold one for 12 months maybe it's just timing: The last one you sold was December 18, 2009, it's now December 19, 2010, you might still sell some this season. If it's supposed to be a high volume product that might indicate a problem. But if it's a high-ticket item and you normally sell only a couple a year, maybe that's fine.

    Or if you sell snow shovels, and there was no snow in your area one winter, you might go almost two years without selling one.

    I certainly wouldn't go automatically deleting everything that hasn't sold in 12 months.

  • Jay (unregistered)

    Why was the month of the code change redacted? I mean, I understand redacting the VP's name. You don't want someone to call him and say, "Hey Fred, I see an employee of yours just made a post about you on the web saying that you're an idiot." But why redact the month? Are you afraid that someone will track down the VP involved by getting a list of all VPs who ordered code changes in September? (I think I saw them solve a crime that way on CSI once ...) Or are you afraid of offending all the June-lovers who would be outraged at the thought that a foolish code change was made in their favorite month?
