• achille (cs) in reply to rbriem

    rbriem:

    The INNER JOIN suppresses any records from either table that don't have a matching key field in the other (pretty sure - been a while since I SQL'ed); IIRC, this is the default behavior for some databases, so this *may* be functionally equivalent.

    Most databases will optimise the query and execute exactly the same plan in both cases....

  • Oliver Townshend (unregistered)

    there's nothing necessarily wrong with a loop rather than one insert statement.  Advantages of using a loop include the ability to log results, audit, trap errors, etc. (although none of those are done here).  The WTFs are:

     1) The loop isn't optimized to be a fast-forward, read-only cursor, which would shoot through the records quickly.  It also returns every field rather than only those needed.

     2) The real error - the SQL command is recreated every time through the loop.  It should be created and prepared once, and then the parameters populated each time through the loop.  Ideally it should be some sort of builtin SQL append command structure, rather than a standalone SQL Insert command.
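    Point 2 can be sketched like this (a minimal sketch using Python's sqlite3 as a stand-in for whatever driver the original code used; the table and column names are invented):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE queue (recipient TEXT, message TEXT)")

    rows = [("5551234", "hello"), ("5555678", "world")]

    # One parameterized statement; the driver prepares it once and binds
    # fresh values per row, instead of rebuilding the SQL string in a loop.
    sql = "INSERT INTO queue (recipient, message) VALUES (?, ?)"
    with conn:                       # one transaction for the whole batch
        conn.executemany(sql, rows)

    print(conn.execute("SELECT COUNT(*) FROM queue").fetchone()[0])  # → 2
    ```

    The same shape works in any driver that supports prepared statements and parameter binding.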

    Oliver Townshend

  • BtM (unregistered) in reply to ParkinT
    ParkinT:

    Isn't that typical?

    Don't solve the software problem.  Just throw more hardware at it and hope it keeps working!

     

    How many times have you seen an example of this in your career?

    Yes, I am asking for some responses to the question.

    Long-winded example: my first real job out of college was working on a simulator for the Navy.  My company was responsible for the acoustic modeling code (an unholy marriage of Ada, C, and Fortran, but that's another story).  Another contractor was responsible for the 3D graphical display, which looked really cool but wasn't quite fast enough.  We were asked to see if we could speed it up.

    This was absolutely the worst C code I have ever seen someone get paid for.  The entire system was a single main() function, around 5000 lines in length, with several hundred distinct variables, half declared within main(), the other half at file scope.  Files containing fragments of C code were #included in random places throughout the source.  The <strike>primary</strike> only control structure was goto (about 13 of them, branching pretty much at random).  It took a coworker of mine over 2 weeks to figure out the flow of control.  It was also brittle as hell; trying to fix a problem for one execution path introduced bugs everywhere else. 

    It was clear from the outset that the only viable answer was to scrap the whole thing and build a new one from the keel up, which we had neither the time nor budget to do.  So we simply tried to build it with optimization turned on (-O2, I think).  The compiler ate up all the physical RAM, then all the swap, and finally panicked the system. 

    We told the customer that we couldn't make it go any faster without doing it over from scratch.  So they just bought faster hardware. 

  • emurphy (cs) in reply to RH

    Anonymous:

    I'm not defending the WTF code, but CodeRage, you missed the fact that the data is coming from several tables, not just one.  They were cherry-picking values from across the data model and inserting them into a single record.  A good coder with deficient data modeling skills would see this as a good solution.

     

    Well, due to the different normalization constraints (see my previous comments), you'd still want to pick some values from across the data model and insert them into a single record - though those values should be limited to e.g. "user U1's ID" rather than "user U1's ID, SMS number, first and last name"; the extra info should instead be picked up by joining the user table to the queue table.

     

  • consultant - stress the first syllable (unregistered) in reply to jaspax
    jaspax:

    I'm sure that somewhere out there are competent, professional, and dedicated consultants that really are worth every bit as much as they're paid. However, reading this site almost convinces me that they don't exist. I'm developing a bitter prejudice against outsourcing and consulting. Which is bad, because my boss is trying to convince me that we should outsource our next component...

     (Ah, but our business is one giant WTF, so what do I care?)
     

    I've seen one once. Honestly.

  • kbiel (unregistered) in reply to BradC
    BradC:

    So are these two SQL statements functionally identical? (and identical speed to run?)

    SELECT *
    FROM table1, table2
    WHERE table1.key = table2.key

    and

    SELECT *
    FROM table1
    INNER JOIN table2
    ON table1.key = table2.key

    The first version is old-school MS SQL Server.  There was a time that MS SQL Server (and Sybase SQL Server) did not support joins directly.  All tables to be joined were listed in the FROM clause and the WHERE clause contained any logic for joining tables.  Today, MS still supports the first version, but tries to optimize it as an inner join in its execution plan.  That's why the consultants had to throw in all the [table].* crap, so that it would slow it down and hide the fact that the old-school join is just as fast as doing it the right way. ;)

    The consultants were clearly afraid of SQL and tried to move as much logic into the code as possible.  I could write this spamware in about an hour using a SQL agent and writing a multithreaded client service to poll a queue table, taking the top 10 or 20 items to be processed by a thread.  The slowest part of this whole program should be the actual SMS text sending routine.
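    For what it's worth, the equivalence of the two join spellings is easy to check empirically; here's a quick sketch with Python's sqlite3 (invented tables and data):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE table1 (key INTEGER, a TEXT);
        CREATE TABLE table2 (key INTEGER, b TEXT);
        INSERT INTO table1 VALUES (1, 'x'), (2, 'y'), (3, 'z');
        INSERT INTO table2 VALUES (1, 'p'), (3, 'q');
    """)

    # Old-school comma join with the join condition in WHERE...
    old_school = conn.execute(
        "SELECT * FROM table1, table2 WHERE table1.key = table2.key").fetchall()
    # ...versus the explicit ANSI INNER JOIN.
    ansi = conn.execute(
        "SELECT * FROM table1 INNER JOIN table2 ON table1.key = table2.key").fetchall()

    print(sorted(old_school) == sorted(ansi))  # → True
    ```

    Same rows either way; the difference is readability and how easy it is to forget a join condition.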
     

  • utu (cs) in reply to maweki
    maweki:

    JOIN is evil, indeed very evil.

     Puh-leese... databases were born to JOIN :)
     

  • NZ'er (unregistered) in reply to consultant - stress the first syllable
    Anonymous:
    jaspax:

    I'm sure that somewhere out there are competent, professional, and dedicated consultants that really are worth every bit as much as they're paid. However, reading this site almost convinces me that they don't exist. I'm developing a bitter prejudice against outsourcing and consulting. Which is bad, because my boss is trying to convince me that we should outsource our next component...

     (Ah, but our business is one giant WTF, so what do I care?)
     

    I've seen one once. Honestly.

     

    Yep, and I saw Elvis walking down the street just last week! 

  • Rich (unregistered)

    the ones that don't automatically restart when the computer is rebooted.

     

     

    WTF? Have you never heard the startup folder in the start menu?

     

    :D 

     

    Rich 

  • RH (unregistered) in reply to emurphy
    emurphy:

    Anonymous:

    I'm not defending the WTF code, but CodeRage, you missed the fact that the data is coming from several tables, not just one.  They were cherry-picking values from across the data model and inserting them into a single record.  A good coder with deficient data modeling skills would see this as a good solution.

     

    Well, due to the different normalization constraints (see my previous comments), you'd still want to pick some values from across the data model and insert them into a single record - though those values should be limited to e.g. "user U1's ID" rather than "user U1's ID, SMS number, first and last name"; the extra info should instead be picked up by joining the user table to the queue table.

     


    Ah yes, you are correct.  I suppose the way I ordered my comments would lead one to believe that I was against a limited amount of cherry-picking.  Of course in any computer system, if information is not synthesized from data on-hand, it must have been copied from elsewhere in the system.
  • phil (unregistered)
    Nope, y'all have it wrong completely.  Yeah, there are inserts in a loop but Java can sort that sort of thing out pretty easily.  THIS is the WTF: 
    string sqlCmd = 
    "SELECT o.*, r.*, ..., c.*, u.* " +
    " FROM Orgs o, Recipients r, Contacts C, " +
    " ... " +
    " Groups g, Users u " +
    " WHERE m.Group_Id = g._Group_Id " +
    " AND u.User_Id = gu.User_Id " +
    " AND ... " +
    " ORDER BY o.Priority_Code, o.Delivery_Num, ...";
    Just how many tables are lurking in the "..."?  I imagine more than one or two.  An actual data model wouldn't go amiss here, and a decent conversion to a physical implementation.
     
     
  • CodeRage (cs) in reply to RH
    Anonymous:
    CodeRage:

    Honestly, I think the real WTF here is using the technique of copying a bunch of rows from one table to another to indicate a change in status. This sounds like a really stupid design.  I guess something like this might need to be done if there were two different databases and systems involved, but the optimization done by our hero would indicate that these two tables are in the same database.

    Why not use just ONE table, and have some sort of STATUS field on it, and set such status field to "SENT" or similiar after each messages was processed/sent/whatever?  Really, given the need to fix the problem quickly, our hero's solution was great, but WTF, what kind of design is this in the first place?

    Please fill me in if I'm completely missing something here!

     


    I'm not defending the WTF code, but CodeRage, you missed the fact that the data is coming from several tables, not just one.  They were cherry-picking values from across the data model and inserting them into a single record.  A good coder with deficient data modeling skills would see this as a good solution.

    However, since this is a "batch" and the data seems to be coming from far and wide, I would imagine that there is a whole lot of data duplication here.  A single record probably looks something like this:  
    (sender, recipient, time_sent, subject, message, etc.)  I draw this conclusion from the OP: "...the web UI would send the service a message with a list of recipients and the service would convert it to individually-addressed SMS messages and then send the messages to the appropriate queue."  The last three columns in the record are dupes.

    Additionally:  3 hours for 2000 records... about 11 rows a minute?  BS.  Something else must have been wrong.  Even 2000 records in 5 minutes sounds slow on a reasonably busy system.  And don't respond with "Well, perhaps the sender service was hammering the DB with updates and deletes."  It couldn't hammer the DB; it's quite obvious that they couldn't populate the queues fast enough to give the sender service any considerable amount of work to do to generate your scapegoat updates and deletes.

    I appreciate the replies.  What I am missing here is that with the original WTF code, and the optimized solution, nowhere do you see the 'unprocessed messages' deleted or modified in any way to indicate that "this message has been placed in queue".  The one line SQL may be great at populating the queue, but what keeps it from populating the messages over and over?

    If the point of the queue is to make the bulk send of messages as simultaneous as possible, I'd probably do the big select query, writing each message to a flat file, and then mark each record as "queued".  Then, notify a second process to grab that flat file, and blast the messages as fast as possible.  If flat files aren't fun, use a queue table, I guess, but still, something needs to be done to mark the messages in the first query as sent, queued, whatever.

  • mathew (unregistered) in reply to CodeRage
    CodeRage:

    The one line SQL may be great at populating the queue, but what keeps it from populating the messages over and over?

    ACID? (Specifically the A.)

  • savar (cs) in reply to Red5
    Red5:

    "The client instead bought a much faster and clustered database server."

    If all else fails, just throw more money at it.  That's the real WTF here.

    The real WTF is that the client has that much money, and yet JT is making a lowly contractor's wage. I don't see stupid queries like that where I work (we have a good review process), but if I did, I would certainly be vocal about it. Don't take no for an answer. If somebody tells you to shut up, take it over their head. There is SOMEBODY in the company who will appreciate saving money on hardware by writing better code.

  • utu (cs) in reply to zip
    zip:

    This bothers me too.  5 minutes for 2000 insert commands?  That seems really slow.  I must be missing something.

    You are only seeing a single transaction of a single client taking place here. Normal gateways handle thousands of transactions all the time for thousands of different users. It's all about scaling; when the count of concurrent transactions increases, *any* system slows down. When there are other transactions taking place, there are some that need to "wait their turn"; more transactions, more waiting.

    Furthermore, as has been stated earlier, fetching a set of rows from server to client only to return them one by one back to the server is a non-scalable solution (the original one written by the consultant). However, if we can make one query to perform a bulk insert of the rows we want *all within the server*, we decrease the work required ... from 2001 executed queries (each of which, in the worst case, needs to wait for some time before it actually gets a chance to execute) to just one query. The best way to optimise a job is not to do it at all :) and I don't mean that the actual work should be left undone; with one query we do just as much work as the original 2001 queries, but with far less overhead.

    Because we had much less overhead, there was much more time on the database end to do other things. This meant that we were able to serve many more clients in the same amount of time. If all the clients behave the same way, the performance gain compounds rather than staying linear.

    Still, as a conclusion; having seen how some operator end SMS gateways have been built, I think it's rather amazing SMS works as well as it does these days. There are plenty of similar WTFs to be found on those systems :D

  • CodeRage (cs) in reply to mathew
    mathew:
    CodeRage:

    The one line SQL may be great at populating the queue, but what keeps it from populating the messages over and over?

    ACID? (Specifically the A.)

    ACID, are you on it?  Yes, we insert a bunch of records into QUEUE by running that query.  Someday, oh someday, we may want to populate that queue again with more unsent messages.  When we run that query again, it's going to pull up those same records again (in addition to any new ones) -- unless we do something to remove them or mark them as sent in the tables of the first query, which I don't see happening here.

  • Very Slow Service (unregistered) in reply to utu

    So what is the difference between LEFT JOIN and INNER JOIN?  Or are they the same, and it's just a dialect thing?

  • savar (cs) in reply to brian j. parker
    Anonymous:

    Not that this is related to the WTF, but... most modern databases will treat those two queries identically.  The second form is usually preferable, though, since some outer joins will be ambiguous when expressed the first way.  Also, at least in my opinion, the second form is easier to understand for joins with many tables.

    No ambiguity, at least not in Oracle...you can use the (+) operator to make outer joins:

    SELECT t1.col1, t2.col1 FROM table1 t1 table2 t2 WHERE t1.pk (+) = t2.pk (+)

     This is a full outer join. You can make a left join by removing the (+) on the right-hand side, I believe.

     Still, ANSI syntax is clearer and preferred by most developers I know. As somebody else stated, this syntax is an easy way to miss a join key and get a Cartesian join between two tables.

  • James (unregistered) in reply to adamsane

    However MSSQL 2005 does not permit the use of the first one by default.

  • RogerC (cs)
    Alex Papadimoulis:

    The first thing they noticed when they came in was that the service was not running as a Service at all -- it was running as an Interactive Windows Application. I'm talking the GUI kind of application; the ones that don't automatically restart when the computer is rebooted. That explained stability problem number one.



    But ... but ... but... if it's not a GUI program, how would the consultants get the PFK program to work with it??? 

  • lomaxx (cs)

    gotta hate it when you have a superior solution, but not a superior enough position to get it put into production

  • RogerC (cs) in reply to Red5
    Red5:

    "The client instead bought a much faster and clustered database server."

    If all else fails, just throw more money at it.  That's the real WTF here.

    Where "all else fails" is defined as "we have a good solution but we don't like where it came from". This attitude alone is sufficient to keep mankind from ever reaching the stars.

  • John Smallberries (cs) in reply to Jeff S
    Jeff S:
    zip:
    Anonymous:
    Anonymous:
    Anonymous:

    So what I don't get (don't know much about databases myself) is:
    Why does the non-WTF query still take nearly five minutes for 2000 SMS on a presumably
    large server? A SMS is not longer than ~160 characters afaik, that's not THAT
    much of data.
     

    The wtf is the loop to make individual inserts.  (If I'm reading correctly.) 

    Yes, sure. I was talking about the sane and optimized version of the query... ?

    This bothers me too.  5 minutes for 2000 insert commands?  That seems really slow.  I must be missing something.

    In addition to the slowness caused by what I mentioned in my previous post, I can only guess that maybe a cross join was involved accidentally in the sql statement constructed .. we can't see the whole thing, but it is doing all joins in the WHERE clause so it's very possible.  That's perhaps the biggest consequence of not using ANSI syntax -- it is not always clear when all tables are not joined properly and it is very easy to miss a relation or two.

    But only the insert was optimized in the fixed version; the Cartesian product problem would still be there. Also, only 2000 inserts were performed (or so it's stated), so this probably wasn't the issue. It still doesn't add up.

    FWIW, I think the ANSI syntax is far easier to read, and makes your intentions clear. 

  • Steamer25 (unregistered) in reply to John Smallberries

    I think the implication from the post was that the application was downloading a crap-ton of data from the database server only to re-force it through the limited-when-compared-to-that-of-the-fibre-channel-array-on-the-database-server bandwidth of the local network. I chalk the fuzziness in the numbers ("2000 SMS messages shouldn't take that long") up to anonymization and WTF code elsewhere in the system.

  • achille (cs) in reply to Very Slow Service

    Anonymous:
    So what is the difference between LEFT JOIN and INNER JOIN?  Or are they the same, and it's just a dialect thing?

     So if you have a table with prisoners:

    INMATE NUMBER   NAME
    -------------   --------------
     1              JOHN DOE
     2              BILLY BOB
     3              MOHAMMAD SMITH

    and you also have a table listing crimes:

    INMATE NUMBER   CRIME
    -------------   ------------
     1              Robbery
     2              Embezzlement
     2              Credit Fraud

    In this case inmate number 3 does not have a crime listed, so a regular (inner) join would fetch:

    NAME            CRIME
    --------------  ------------
    JOHN DOE        Robbery
    BILLY BOB       Embezzlement
    BILLY BOB       Credit Fraud

    A left join would keep all rows from the table on the left:

    NAME            CRIME
    --------------  ------------
    JOHN DOE        Robbery
    BILLY BOB       Embezzlement
    BILLY BOB       Credit Fraud
    MOHAMMAD SMITH  (null)
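    The same example runs as-is against an in-memory SQLite database (a sketch; table and column names shortened):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE inmates (num INTEGER, name TEXT);
        CREATE TABLE crimes  (num INTEGER, crime TEXT);
        INSERT INTO inmates VALUES (1, 'JOHN DOE'), (2, 'BILLY BOB'), (3, 'MOHAMMAD SMITH');
        INSERT INTO crimes  VALUES (1, 'Robbery'), (2, 'Embezzlement'), (2, 'Credit Fraud');
    """)

    # Inner join: only inmates with at least one matching crime row.
    inner = conn.execute("""SELECT name, crime FROM inmates
                            JOIN crimes ON inmates.num = crimes.num""").fetchall()
    # Left join: every inmate, with NULL where no crime matches.
    left = conn.execute("""SELECT name, crime FROM inmates
                           LEFT JOIN crimes ON inmates.num = crimes.num""").fetchall()

    print(len(inner))  # → 3: MOHAMMAD SMITH is dropped
    print(len(left))   # → 4: MOHAMMAD SMITH is kept, with crime = None
    ```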

  • Zygo (unregistered) in reply to BradC
    BradC:

    So are these two SQL statements functionally identical? (and identical speed to run?)

    SELECT *
    FROM table1, table2
    WHERE table1.key = table2.key

    and

    SELECT *
    FROM table1
    INNER JOIN table2
    ON table1.key = table2.key

     I usually use the second, but the example uses the first--is this a secondary wtf? Or just a usage convention difference?

     One is ANSI standard SQL, and the other is ANSI standard SQL from a later version of the standard.  In the Before Time there was only inner join; later the various kinds of outer join were added, and "INNER JOIN" was added for symmetry.  The word "INNER" is a noise word that is mostly ignored, so "JOIN" without qualification and "INNER JOIN" are identical.

    Both are supposed to be semantically identical, although some servers will treat one of the two forms as a hint that the joins should be done as nested loops with iteration over one table or the other on the outer loop.

  • Nobody in Particular (unregistered) in reply to Very Slow Service

    Anonymous:
    So what is the difference between LEFT JOIN and INNER JOIN?  Or are they the same, and it's just a dialect thing?

     

    No, they are very different. LEFT JOIN is synonymous with LEFT OUTER JOIN... basically, all items in the LEFT table will be present, even if there's nothing in the RIGHT table for them to JOIN to. RIGHT OUTER JOIN is just the opposite, but since left and right are arbitrary (you can always swap the order you specify the tables in), usually only LEFT OUTER JOIN is ever used.

    INNER JOIN, on the other hand, will return only rows that have a matching counterpart in the other table.

     

    Say you want a list of all customers and their most recent order. An INNER JOIN would do exactly that, and would leave out customers with no orders. A LEFT OUTER JOIN will return customers with no orders, too.

  • Zygo (unregistered) in reply to Cody
    Anonymous:
    Anonymous:
    Anonymous:

    So what I don't get (don't know much about databases myself) is:
    Why does the non-WTF query still take nearly five minutes for 2000 SMS on a presumably
    large server? A SMS is not longer than ~160 characters afaik, that's not THAT
    much of data.
     

    The wtf is the loop to make individual inserts.  (If I'm reading correctly.) 

    I'm obviously not reading correctly.  The non-wtf still takes a long time because it's still a crapload of data to move?  Maybe there's more info we don't see?  I dunno.

     

    captcha = bedtime.  I wish. 

    The non-WTF query still has to do the SELECT.  Maybe there's millions of rows in the SELECT that have to be scanned, filtered, and boiled down to 2000 output records.  I wouldn't be surprised if there e.g. wasn't an index on one of the join columns in one of the smaller tables. 

     
    I would expect that the 2000 inserted records take maybe 1-10ms to execute depending on server load, the other 299,999ms would be spent doing the SELECT.

     That does imply that the 2000 INSERTs take 175 minutes to execute though.
     

    Another possibility is that there's some nasty locking issue going on with concurrent transactions.  I've seen systems where some client somewhere grabs hold of a lock on some row in some table, and *everything* grinds to a halt until that client (which for some reason is always the slowest client ;-) finishes whatever it was doing.  Maybe the big SELECT has to wait until a moment where nobody is sending any SMS messages before it gets to start executing.

  • same anon as last time (unregistered) in reply to Nobody in Particular
    Anonymous:

    Anonymous:
    So what is the difference between LEFT JOIN and INNER JOIN?  Or are they the same, and it's just a dialect thing?

     

    No, they are very different. LEFT JOIN is synonymous with LEFT OUTER JOIN... basically, all items in the LEFT table will be present, even if there's nothing in the RIGHT table for them to JOIN to. RIGHT OUTER JOIN is just the opposite, but since left and right are arbitrary (you can always swap the order you specify the tables in), usually only LEFT OUTER JOIN is ever used.

    INNER JOIN, on the other hand, will return only results that have a non-null counterpart.

     

    Say you want a list of all customers and their most recent order. An INNER JOIN would do exactly that, and would leave out customers with no orders. A LEFT OUTER JOIN will return customers with no orders, too.

     

    Thank you. 

  • Nobody (unregistered)

    I'm just glad to know that the people selling those idiotic joke SMS services are just as dumb as their clients.  Of course, I suppose that means that they really understand their market...  :-)

  • RogerC (cs) in reply to RogerC
    RogerC:

    But ... but ... but... if it's not a GUI program, how would the consultants get the PFK program to work with it??? 

    Sorry, that should have been PFB (Press the Freakin Button) instead of PFK. 

  • An apprentice (unregistered) in reply to Rich
    Anonymous:

    the ones that don't automatically restart when the computer is rebooted.

    WTF? Have you never heard the startup folder in the start menu?

    :D 

    Rich 

    I think programs in the startup folder (and registry Run* entries) are not processed until someone actually logs in, while a service can run in the background even with no user logged in.

  • emurphy (cs) in reply to CodeRage

    CodeRage:

    What I am missing here is that with the original WTF code, and the optimized solution, nowhere do you see the 'unprocessed messages' deleted or modified in any way to indicate that "this message has been placed in queue".  The one line SQL may be great at populating the queue, but what keeps it from populating the messages over and over?

    If the point of the queue is to make the bulk send of messages as simultaneous as possible, I'd probably do the big select query, writing each message to a flat file, and then mark each record as "queued".  Then, notify a second process to grab that flat file, and blast the messages as fast as possible.  If flat files aren't fun, use a queue table, I guess, but still, something needs to be done to mark the messages in the first query as sent, queued, whatever.

     

    Obviously the original system must have had some sort of "don't queue things more than once" functionality, we just weren't shown it because

    1. it wasn't a WTF
    2. it was more of the same type of WTF
    3. it was a WTF, but not as impressively bad as what we were shown
    4. it was implemented, not as "mark the stuff we just queued as already-queued", but as "queue only stuff with a create-date newer than the last time this routine ran", and is thus part of the ... in the WHERE clause
    5. other (please specify)
  • emurphy (cs) in reply to Zygo

    Anonymous:

    I've seen systems where some client somewhere grabs hold of a lock on some row in some table, and *everything* grinds to a halt until that client (which for some reason is always the slowest client ;-) finishes whatever it was doing.

     

    That often turns out to be "the other clients also grind things to a halt, just not for long enough to be noticeable".

     

  • Rank Amateur (cs) in reply to Red5
    Red5:
    Anonymous:

    I want my new title to be:

    "Grand Imperial PooBah of all Things Great and Small"

    LOL, You picked up on another minor WTF, job titles and their ambiguous meanings.

    I choose  "Commander of the Known Universe" for my next title :)

    My former roommate once went to a conference where he had them put "Supreme Hosehead" as his title on his name tag. I don't think anyone said anything.

    --RA

  • nutscrape (unregistered) in reply to CodeRage
    CodeRage:

    I appreciate the replies.  What I am missing here is that with the original WTF code, and the optimized solution, nowhere do you see the 'unprocessed messages' deleted or modified in any way to indicate that "this message has been placed in queue".  The one line SQL may be great at populating the queue, but what keeps it from populating the messages over and over?

    It requires three statements, but it's pretty easy to do using atomicity and transactions:


    1. Select out the messages destined for the queue out of the input join into a temporary table.
    2. Delete or flag those rows from the input table as being queued.
    3. Insert from the temp table into the queue table. 

    Or without using a temp table:

     1. INSERT ... SELECT .... into the queue table, use a column in the queue table to flag a row to indicate that it is newly added.
     2. Flag or delete rows in the input tables if they're in the queue marked with the newly added flag.
     3. Remove the newly added flag from the rows in the queue table.
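    A sketch of the second recipe, using Python's sqlite3 and an invented schema (the with-block makes the three steps one atomic transaction):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE messages (id INTEGER, body TEXT, queued INTEGER DEFAULT 0);
        CREATE TABLE queue    (msg_id INTEGER, body TEXT, newly_added INTEGER);
        INSERT INTO messages (id, body) VALUES (1, 'a'), (2, 'b');
    """)

    with conn:  # one atomic transaction around all three steps
        # 1. INSERT ... SELECT into the queue, flagged as newly added.
        conn.execute("""INSERT INTO queue (msg_id, body, newly_added)
                        SELECT id, body, 1 FROM messages WHERE queued = 0""")
        # 2. Mark the source rows so they can never be queued again.
        conn.execute("""UPDATE messages SET queued = 1 WHERE id IN
                        (SELECT msg_id FROM queue WHERE newly_added = 1)""")
        # 3. Clear the newly-added flag.
        conn.execute("UPDATE queue SET newly_added = 0")

    # Running step 1 again queues nothing: every message is already marked.
    with conn:
        conn.execute("""INSERT INTO queue (msg_id, body, newly_added)
                        SELECT id, body, 1 FROM messages WHERE queued = 0""")

    print(conn.execute("SELECT COUNT(*) FROM queue").fetchone()[0])  # → 2
    ```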
     

  • brandon (unregistered)

    Unfortunately, this problem of an inefficient SQL algorithm, if you will, is all too common for comfort. I once saw code just like the example provided today, where invoice and billing generation took well over 6 hours for roughly 20,000 records. Initially, the business thought it was due to the volume of records being processed.

    I, however, felt something was wrong, and when I looked under the hood, I saw code that resulted in n*n processing. The 20,000 records were obtained in one dataset. A loop is done for each of these rows, and within that loop a call is made to the database to find out if the record was suitable for billing and invoicing.

    This in effect meant 20,001 calls to the database just for the pre-process phase. The process phase was equally interesting. The rows that qualified for processing were looped through individually, and then business logic with database calls was run for each and every one of these rows.

    Bottom line is that I charted and deduced that there were roughly 150,000 separate calls to the database for the 20,000 rows of sample data. This resulted in the process taking 6 hours. I re-designed the code to take the most direct route to obtain data, plus the use of Readers (both SqlDataReader and the XmlReader) to generate the invoices in under 30 mins for the same 20,000 rows.
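    In miniature, the difference between the two designs looks something like this (a sketch in Python's sqlite3 with an invented schema; the original was .NET, but the shape of the fix is the same):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE records  (id INTEGER, amount REAL);
        CREATE TABLE billable (record_id INTEGER);
        INSERT INTO records VALUES (1, 10.0), (2, 20.0), (3, 30.0);
        INSERT INTO billable VALUES (1), (3);
    """)

    # n+1 style: one round trip per record (the 6-hour version, in miniature)
    slow = []
    for rec_id, amount in conn.execute("SELECT id, amount FROM records").fetchall():
        if conn.execute("SELECT 1 FROM billable WHERE record_id = ?",
                        (rec_id,)).fetchone():
            slow.append(rec_id)

    # Set-based: one query answers the question for every row at once
    fast = [rec_id for (rec_id,) in conn.execute(
        """SELECT r.id FROM records r
           JOIN billable b ON b.record_id = r.id""")]

    print(slow == fast)  # → True, with 1 round trip instead of n+1
    ```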

    Problem was that the business has no budget to FIX this flaw and had no unit testing in place to ensure that after the changes were made, everything else checked out. Plus the programmer that was responsible for this big WTF was very touchy about enhancements to his work since that would put him in a very bad light.

    Well, the business unsurprisingly ran into financial difficulties and had to let go all of us consultants.

    Moral of the story: When you see WTFs like today, the cause of them is often POOR MANAGEMENT

  • Doer of Everything and Nothing (unregistered) in reply to NZ'er

    Where I work we have successfully used outsourcing for some jobs (not programming).  It only works when you have someone dedicated to QA, demanding quality work and providing the company what it needs to deliver that quality work.

  • Cheong (unregistered) in reply to adamsane
    adamsane:

    Yes, they are identical functionally.

    However, on some db systems they will work differently.

     Command #1:

    1. Join all rows together
    2. Use the where clause to exclude rows
    3. Return the result set

    Command #2:

    1. Join only the rows where the ON clause is true
    2. Return the result set

     Most of the large (expensive) DBs like Oracle and MSSQL will convert this to the same execution plan.

     

    Yes. In MySQL (as of 4.x) the second statement is almost always faster than the first one to some degree.

    Haven't studied if there's optimization in 5.X yet. 

  • Cheong (unregistered) in reply to SJ

    Anonymous:
    The WTF is if the contractor who is more qualified than the consultants remains a contractor.

    It's not uncommon that experienced nurses know more than new-grad (perhaps even moderately experienced) doctors, yet those nurses won't become doctors.

    It's educational background that counts...

  • foxyshadis (cs) in reply to nutscrape
    Anonymous:
    CodeRage:

    I appreciate the replies.  What I am missing here is that with the original WTF code, and the optimized solution, nowhere do you see the 'unprocessed messages' deleted or modified in any way to indicate that "this message has been placed in queue".  The one line SQL may be great at populating the queue, but what keeps it from populating the messages over and over?

    It requires three steps, but it's pretty easy to do using atomicity and transactions:


    1. Select the messages destined for the queue from the input join into a temporary table.
    2. Delete or flag those rows from the input table as being queued.
    3. Insert from the temp table into the queue table. 

    Or without using a temp table:

     1. INSERT ... SELECT .... into the queue table, using a column in the queue table to flag each row as newly added.
     2. Flag or delete rows in the input tables if they're in the queue marked with the newly added flag.
     3. Remove the newly added flag from the rows in the queue table.
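
    The three flag-based steps above can be sketched as a single transaction. This uses Python's sqlite3 with a guessed-at outbox/send_queue schema (the table and column names are illustrative, not from the article):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE outbox (id INTEGER PRIMARY KEY, body TEXT, queued INTEGER DEFAULT 0);
    CREATE TABLE send_queue (msg_id INTEGER, body TEXT, newly_added INTEGER DEFAULT 0);
    INSERT INTO outbox (body) VALUES ('hello'), ('world');
""")

with conn:  # all three steps commit (or roll back) together
    # 1. Set-based INSERT ... SELECT, marking queue rows as newly added.
    conn.execute("""
        INSERT INTO send_queue (msg_id, body, newly_added)
        SELECT id, body, 1 FROM outbox WHERE queued = 0
    """)
    # 2. Flag the input rows that just made it into the queue.
    conn.execute("""
        UPDATE outbox SET queued = 1
        WHERE id IN (SELECT msg_id FROM send_queue WHERE newly_added = 1)
    """)
    # 3. Clear the newly-added marker.
    conn.execute("UPDATE send_queue SET newly_added = 0")
```

    Because everything runs in one transaction, a second run of step 1 finds no rows with queued = 0 and inserts nothing, which is what keeps messages from being queued twice.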
     

    Why be so circuitous? Just drop the queue table and recreate it every time. Make it temp, if you have the memory. When the SMS engine starts, it'll drop and populate the queue, then start pumping them out and marking them sent in the main database, and when done (or nearly done) can start the process over. (The SMS engine need not be the SMS server, it should be a load-balancing distributor if there's more than one sending server.)

    Also, a database discussion without Jeff S? What has the world come to?

  • anonymous (unregistered) in reply to Cody

    I don't think the loop is necessarily so crazy, depending on what has been elided in the for block. For example, what if 1 insert in 2000 throws a FK constraint error or otherwise fails? Would you rollback everything in that batch and sit on it while you wait for someone to answer his page and figure out where he fscked up the database design, presumably leaving 2000 undelivered messages, or would you log the specifics of the error and try to deliver the other 1999?
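
    For what it's worth, that per-row error handling looks roughly like this. A sketch with Python's sqlite3, where one hypothetical bad row violates a NOT NULL constraint but the rest are still committed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE send (mobile TEXT NOT NULL)")

rows = ["555-0001", None, "555-0002"]  # None violates NOT NULL
failures = []
for mobile in rows:
    try:
        with conn:  # commit this row on its own
            conn.execute("INSERT INTO send (mobile) VALUES (?)", (mobile,))
    except sqlite3.IntegrityError as exc:
        failures.append((mobile, str(exc)))  # log it, keep delivering the rest
```

    The trade-off versus the set-based INSERT ... SELECT is exactly the one described above: per-row commits cost a roundtrip and a lock each, but a single bad row can't hold the other 1,999 messages hostage.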

  • JM (the guy) (unregistered) in reply to RH

    To Anonymous (and for the record):

    The "send" table contains the individual SMS messages to be sent; the "outbox" table, OTOH, contains just the text to be sent. The JOINs are necessary to connect in the "address book", which contains the mobile numbers to send to. As for the 3 hours it took: that was because they were doing an INSERT for every row to be placed in the "send" table, instead of a simpler (and more amenable to optimization by the SQL server) INSERT INTO ... SELECT FROM. Why would the loop be slower?

    • Every INSERT query would have to make a roundtrip from the web server to the DB server (in this case, on separate machines).
    • Every INSERT query itself would have to acquire a lock on the table, perform the insert, release the lock, then return the result to the caller.

    Thus the WTF. 
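
    The set-based alternative is one statement and one roundtrip. A sketch with Python's sqlite3 -- the outbox/address_book/send schema is a guess at the original structure, not taken from the article:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE outbox (id INTEGER PRIMARY KEY, body TEXT);
    CREATE TABLE address_book (contact_id INTEGER, mobile TEXT);
    CREATE TABLE send (mobile TEXT, body TEXT);
    INSERT INTO outbox VALUES (1, 'Sale on now!');
    INSERT INTO address_book VALUES (1, '555-0001'), (2, '555-0002');
""")

# One roundtrip and one lock acquisition, instead of a SELECT
# followed by a separate INSERT per recipient:
conn.execute("""
    INSERT INTO send (mobile, body)
    SELECT a.mobile, o.body
    FROM outbox o CROSS JOIN address_book a
""")
```

    The server populates "send" entirely inside the engine, which is why the gap between the two approaches grows to orders of magnitude once the row counts climb.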

    PS: I long since left that company. As for the figures, I don't have the actual/exact ones at hand at the moment, but I do recall the difference was several orders of magnitude -- from minutes/hours down to seconds or minutes. And the 2000 records refers to 2000 rows in the "outbox" table -- which would populate several thousand rows in the "send" table. (I think I forgot to add that clarification.)

  • JM (the guy) (unregistered) in reply to CodeRage

    Both the "outbox" and "send" tables in fact had a status column; I just didn't find it necessary to include it in the submission (as there wasn't anything WTF about it-- just update the column at the end, after doing all sends, or update a row's send status etc.).

     BTW, the original code didn't even wrap all the INSERTs in a single transaction. Go figure.
     

  • Lankiveil (cs)

    Stories like this make Lankiveil angry.  It's got the trifecta: inept consultants, awful code, and lazy management.

  • felix (cs) in reply to BradC
    BradC:

    So are these two SQL statements functionally identical? (and identical speed to run?)

    SELECT *
    FROM table1, table2
    WHERE table1.key = table2.key

    and

    SELECT *
    FROM table1
    INNER JOIN table2
    ON table1.key = table2.key

    Common sense (and my PostgreSQL background) dictates that the second form should run faster. I was quite surprised when MySQL (4.1, as it happens) was faster on the first version by a few milliseconds. Then I remembered the kind of application MySQL is optimized for...

  • Charles (unregistered) in reply to ammoQ

    Where they are definitely NOT identical is when they are outer joins...

    SELECT *
    FROM table1, table2
    WHERE table1.key *= table2.key

    and

    SELECT *
    FROM table1
    Left JOIN table2
    ON table1.key = table2.key

     

    Then they can, in certain circumstances, actually return different results...
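
    The *= syntax is legacy Sybase/SQL Server and won't parse on most engines, but the same divergence shows up in standard SQL depending on whether an outer join's filter lives in the ON clause or the WHERE clause. A sketch with Python's sqlite3 (hypothetical tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (key INTEGER);
    CREATE TABLE table2 (key INTEGER, val TEXT);
    INSERT INTO table1 VALUES (1), (2);
    INSERT INTO table2 VALUES (1, 'a');
""")

# Filter in the ON clause: unmatched table1 rows survive, NULL-padded.
on_rows = conn.execute("""
    SELECT t1.key, t2.val FROM table1 t1
    LEFT JOIN table2 t2 ON t1.key = t2.key AND t2.val = 'a'
""").fetchall()

# Same filter in WHERE: the NULL-padded rows are discarded, so the
# outer join quietly behaves like an inner join.
where_rows = conn.execute("""
    SELECT t1.key, t2.val FROM table1 t1
    LEFT JOIN table2 t2 ON t1.key = t2.key
    WHERE t2.val = 'a'
""").fetchall()
```

    Here on_rows keeps both table1 keys (one with a NULL val), while where_rows keeps only the matching key -- two queries that look interchangeable but are not.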

  • woodle (cs) in reply to Charles

    I'm sorry but you're all on completely the wrong track with this one.

    Yes, the original SQL/design wasn't very good, but all good developers were once beginners, and you can't have every development team consist of only experienced people.

    Yes, it probably should have been caught in a code review, but we've all been on projects where unimportant things like technical design documents and code reviews are the first thing to go when time starts running out.

    Yes, the better solution should have seen the light of day, but without knowing the politics of why this caste structure was in place it's hard to know whether there was any better way to arrange things.  Politics is a part of life on almost all projects; deal with it and you'll be a happier person.

    No, boys and girls, The Real WTF™ is that this "solution" made it into production without ever having been load tested with real-world data.  Had there been anything other than rudimentary unit testing done, someone would have noticed that the system would be overrun soon after it went live.  At which point, since it would be a show stopper, some of the more experienced developers or (gasp) a DBA would be brought in to help investigate.

    So The Real Culprit™ was the Project Manager, who should have made sure that the testing regime was geared towards proving that the system will do what it is supposed to.  We are not told whether he/she was a permanent, a Consultant, a Contractor or Paula herself, and it doesn't really matter because there are good and bad PMs everywhere.

  • jwenting (cs) in reply to ParkinT
    ParkinT:

    Isn't that typical?

    Don't solve the software problem.  Just throw more hardware at it and hope it keeps working!

    How many times have you seen an example of this in your career?

    Yes, I am asking for some responses to the question.

    Seen it several times. Happens more frequently if the consulting firm is also the hardware supplier (hint: IBM).

Leave a comment on “Very Slow Service”
