• Jonathan Grynspan (unregistered)

    sigh So very sad.

  • (cs)

    sigh

    Some people just never learn...

    First Post!

  • (cs) in reply to Vector

    You can lead a horse to water...

    ...but even if you shove a hose in the horse's mouth and duct tape it closed, he can still refuse to swallow.

  • (cs)

    just be glad you didn't get:

    SET @sql = 'IF @Access_Code = 0
        IF @Archive_Desc IS NULL
            SELECT * FROM [Archive_Letters]
        ELSE
            SELECT * FROM [Archive_Letters]
            WHERE [Archive_Desc] IS NULL

    /* Snip 10 Similar Cases */

    ELSE
        IF @Archive_Desc IS NULL
            SELECT * FROM [Archive_Letters]
            WHERE [Access_Code] = @Access_Code
        ELSE
            SELECT * FROM [Archive_Letters]
            WHERE [Access_Code] = @Access_Code
            AND [Archive_Desc] IS NULL'
    EXEC sp_executesql @sql

  • wtf addict (unregistered)
    The suggestion of dynamic SQL is a WTF in and of itself. What's really needed is non-retarded control logic.
  • Fullstop (unregistered)

    I am not familiar with SQL server, but in Oracle we tend to do such things as this:

    (I'm using the SQL server syntax)

    SELECT * FROM [Archive_Letters]
    WHERE (@Archive_Desc is null or [Archive_Desc] = @Archive_Desc)

    This way, you just short circuit the where clause if the parameter is null.  If it is not null, it gets pulled into the query.

    It can make for one big ugly statement, though.  I am not sure which is worse, one big ugly statement, or 5 small ugly statements.  :-)


  • Adam Machanic (unregistered) in reply to Fullstop

    Unfortunately, SQL Server will not short-circuit something like that in most cases.  As a result, using control-of-flow or dynamic SQL in a case like that will often result in much better performance.
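
    For example, the control-of-flow version for a single optional parameter would look roughly like this (just a sketch of the pattern, reusing the @Archive_Desc parameter and table from the article):

    IF @Archive_Desc IS NULL
        SELECT * FROM [Archive_Letters]
    ELSE
        SELECT * FROM [Archive_Letters]
        WHERE [Archive_Desc] = @Archive_Desc

    Each branch is a plain, index-friendly query; the cost is that the branching multiplies with every additional optional parameter.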


  • Jason (unregistered)

    I'm not sure how dynamic code makes this more maintainable. ... Besides, if you hard code the SQL statement in a string, how is that dynamic?

    It looks like the real problem here is that the guy didn't know how to use CASE effectively.
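
    Presumably something along these lines (a sketch only; note that it will not match rows where [Archive_Desc] itself is NULL, and it carries the same index caveats raised elsewhere in this thread):

    SELECT * FROM [Archive_Letters]
    WHERE [Archive_Desc] = CASE WHEN @Archive_Desc IS NULL
                                THEN [Archive_Desc]
                                ELSE @Archive_Desc END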

  • (cs)

    2 submissions in a row from someone named Adam.
    Adam isn't even a very common programmer name like Steve.
    I better be careful not to name my child Adam lest he get stuck with coworkers who produce WTF code.

  • non-dba (unregistered) in reply to Fullstop

    I always did this in MSSQL by using COALESCE function, such as:

    where t.f = coalesce(@prm, t.f)

    I would avoid dynamic SQL at almost any cost.  I've seen some tremendous performance increases by removing an executed string, no matter how complex the resulting query is.

  • Adam Machanic (unregistered) in reply to non-dba

    Non-DBA,

    Doing that will force a table scan.  This will cause a decrease in performance, not an increase.  Google "sargability" for more information. 

    It's funny that so many developers think that dynamic SQL will cause performance problems; I often use it to fix them.
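
    For instance (an illustrative sketch with a made-up Customers table and an index on LastName), the first predicate below is sargable and can drive an index seek, while the second -- the COALESCE pattern posted above -- generally forces a scan:

    -- Sargable: the indexed column stands alone on one side of the comparison
    SELECT * FROM Customers WHERE LastName = @LastName

    -- Not sargable in practice: the predicate depends on the column on both
    -- sides, so the optimizer cannot simply seek on LastName
    SELECT * FROM Customers WHERE LastName = COALESCE(@LastName, LastName)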


  • Adam (unregistered) in reply to ferrengi

    Choose whatever name you want - the WTFs will happen anyway. [8-|]

  • (cs)

    At least Adam does code reviews.
    This way he can catch the WTFs before they make it into production.
    There would be a lot less material for Alex to choose from if all code were reviewed.
    What usually happens is that the program is turned over to a QA person who only looks at how the program functions and does not see or understand the actual code.

  • (cs) in reply to Adam
    Anonymous:

    Choose whatever name you want - the WTFs will happen anyway. [8-|]



    No Way!
    Your name is the single most important factor that influences whether you will get stuck with a WTF or not.
    I have proof!
  • (cs) in reply to Fullstop
    Anonymous:
    I am not familiar with SQL server, but in Oracle we tend to do such things as this:

    (I'm using the SQL server syntax)
    <font>SELECT * FROM</font> [Archive_Letters]
    <font>WHERE</font> (@Archive_Desc is null or [Archive_Desc] = @Archive_Desc)<font></font>

    This way, you just short circuit the where clause if the parameter is null.  If it is not null, it gets pulled into the query.

    It can make for one big ugly statement, though.  I am not sure which is worse, one big ugly statement, or 5 small ugly statements.  :-)




    I was thinking the same thing. Dynamic SQL is powerful, but tends to be used to create its own special variety of WTFs, since the code itself varies depending on the data. That can make finding the correct combination of data to generate the buggy SQL very difficult. I used to do this all the time -- formulating SQL in Perl depending on program conditions and executing it. Trouble is that your single SQL statement just turned into a permutation of 20 distinct SQL statements. Finding which one is causing issues can be a real b... er.. bummer.

  • (cs) in reply to Adam Machanic

    Anonymous:
    Unfortunately, SQL Server will not short-circuit something like that in most cases.  As a result, using control-of-flow or dynamic SQL in a case like that will often result in much better performance.

    well i ran a very simple test and it can be done, check it out:

    -- Create temp table
    create table #shorcircuit(
    field1 numeric,
    field2 numeric null
    )

    -- Populate temp table
    insert into #shorcircuit
    values(1, 1)

    insert into #shorcircuit
    values(2, null)

    insert into #shorcircuit
    values(3, 3)

    insert into #shorcircuit
    values(4, null)

    declare @var numeric

    select @var = null
    select * from #shorcircuit where (@var is null or field2 = @var)

    drop table #shorcircuit

  • (cs) in reply to Adam Machanic

    Anonymous:
    Non-DBA,

    Doing that will force a table scan.  This will cause a decrease in performance, not an increase.  Google "sargability" for more information. 

    It's funny that so many developers think that dynamic SQL will cause performance problems; I often use it to fix them.

    Google gives me:

    Did you mean: shagability

    hmmmmm, perhaps I DID!

  • Adam Machanic (unregistered) in reply to mozka
    mozka:

    well i ran a very simple test and it can be done, check it out:




    All your test shows is that OR can be used to logically chain predicates.  It does not tell us whether or not the query engine evaluates both predicates for each row of the table.


  • (cs)

    By using executesql your sql code in the stored procedure is never compiled and gets a new query plan every time it is executed, completely removing some of the reasons for using a stored procedure in the first place.  I would say dynamic sql is the WTF here.   So Alex, are you saying we should trade run time performance for little to no increase in developer maintainability?   I would argue the style the original programmer was using would result in the best runtime performance.

     

  • (cs) in reply to Adam Machanic
    Anonymous:
    Non-DBA,

    Doing that will force a table scan.  This will cause a decrease in performance, not an increase.  Google "sargability" for more information. 

    It's funny that so many developers think that dynamic SQL will cause performance problems; I often use it to fix them.



    Great point.  Ironically, when it comes to performance, dynamic SQL is probably the most efficient if you need to build a complicated condition based on a wide variety of parameter values.  Flexible reporting apps are great candidates for dynamic SQL.

    Another misunderstood thing about dynamic SQL is people think that using it automatically means that you are concatenating user input into your SQL strings and are open to injection or other issues. As the original post points out, the spec is to use sp_executesql, allowing you to use parameterized SQL statements.  This is, of course, one of the subtle points missed by the creator of the WTF as he merrily concatenates the parameters into the strings ....
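
    For illustration, a parameterized version of the article's query might look roughly like this (a sketch; the parameter types are assumptions, and note that only the shape of the string changes -- the values themselves are never concatenated in):

    DECLARE @sql nvarchar(4000)
    SET @sql = N'SELECT * FROM [Archive_Letters] WHERE [Access_Code] = @Access_Code'
    IF @Archive_Desc IS NOT NULL
        SET @sql = @sql + N' AND [Archive_Desc] = @Archive_Desc'

    EXEC sp_executesql @sql,
        N'@Access_Code int, @Archive_Desc varchar(255)',
        @Access_Code = @Access_Code,
        @Archive_Desc = @Archive_Desc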

  • Adam Machanic (unregistered) in reply to e.thermal
    e.thermal:

    By using executesql your sql code in the stored procedure is never compiled and gets a new query plan every time it is executed, completely removing some of the reasons for using a stored procedure in the first place.  I would say dynamic sql is the WTF here.   So Alex, are you saying we should trade run time performance for little to no increase in developer maintainability?   I would argue the style the original programmer was using would result in the best runtime performance.




    Untrue.  sp_executesql reuses query plans for the same query strings as long as they use the same named parameters.  You can verify this using a Profiler trace on the SP:CacheHit event.

    As for maintainability, it's quite clear to me that maintaining one query, even with permutations, is far superior to maintaining 20.
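
    For example (a sketch), executing the same statement text twice with different values should show a cache hit on the second call rather than a fresh compile:

    EXEC sp_executesql
        N'SELECT * FROM [Archive_Letters] WHERE [Access_Code] = @Access_Code',
        N'@Access_Code int', @Access_Code = 1

    EXEC sp_executesql
        N'SELECT * FROM [Archive_Letters] WHERE [Access_Code] = @Access_Code',
        N'@Access_Code int', @Access_Code = 2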


  • Vertex (unregistered) in reply to Vector
    Vector:
    First Post!


    *sigh*

    Some people just never learn...
  • (cs)

    o my god

    you just suggested to fix a wtf (the extreme branching) with another wtf (dynamic sql)...
    and you're even proud enough of it to post it here....
    and then it's posted...

    I give up, I'll be a janitor or something from now on

  • (cs) in reply to Adam Machanic
    Anonymous:
    e.thermal:

    By using executesql your sql code in the stored procedure is never compiled and gets a new query plan every time it is executed, completely removing some of the reasons for using a stored procedure in the first place.  I would say dynamic sql is the WTF here.   So Alex, are you saying we should trade run time performance for little to no increase in developer maintainability?   I would argue the style the original programmer was using would result in the best runtime performance.




    Untrue.  sp_executesql reuses query plans for the same query strings as long as they use the same named parameters.  You can verify this using a Profiler trace on the SP:CacheHit event.

    As for maintainability, it's quite clear to me that maintaining one query, even with permutations, is far superior to maintaining 20.


     

    If you notice, the code in the original post that was suggested to be better did not use named parameters and would force a query plan regen on each call.  You're right that if you use sp_executesql correctly you can re-use plans, but compiled stored procedures can still offer better performance.

  • Adam Machanic (unregistered) in reply to t-bone
    t-bone:

    you just suggested to fix a wtf (the extreme branching) with another wtf (dynamic sql)...


    And why is dynamic SQL a WTF?


  • (cs) in reply to Adam Machanic
      - because it's often the very reason for the performance issues I get to fix, as noted above it's not always an issue, but suggesting it to a newbie/crap programmer like that will surely result in crap

      - because it often indicates that the code calling the proc needs more reviewing than the query

      - because it leads to huge and imo unreadable scripts, not any more readable than the original, especially since most sproc debuggers suck, and you have to resort to print @sqlstring to debug it

      - because it's more prone to sql injection most of the time, and when you write the sproc you wouldn't imagine that it'll be used from a web page one day

      etc etc


  • (cs) in reply to Jason
    Jason:
    I'm not sure how dynamic code makes this more maintainable. ... Besides, if you hard code the SQL statement in a string, how is that dynamic?

    It looks like the real problem here is that the guy didn't know how to use CASE effectively.

    I'm no fan of Dynamic SQL myself, but consider the case where you will need to return results from different tables. I've done this on a few occasions when I didn't want to duplicate some of the more complicated logic in views:

    IF @SomeParam = 'someval' SET @TableName = 'Orders_Not_Authorized'
    ELSE SET @TableName = 'Orders_Not_Approved'
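
    ...and then something along these lines to splice the table name in and run the statement (a sketch -- the column and parameter here are made up, and QUOTENAME guards the identifier, since object names cannot be passed as sp_executesql parameters):

    DECLARE @sql nvarchar(4000)
    SET @sql = N'SELECT * FROM ' + QUOTENAME(@TableName)
             + N' WHERE [Order_Date] >= @Cutoff'
    EXEC sp_executesql @sql, N'@Cutoff datetime', @Cutoff = @Cutoff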

  • (cs) in reply to e.thermal
    e.thermal:
    Anonymous:
    e.thermal:

    By using executesql your sql code in the stored procedure is never compiled and gets a new query plan every time it is executed, completely removing some of the reasons for using a stored procedure in the first place.  I would say dynamic sql is the WTF here.   So Alex, are you saying we should trade run time performance for little to no increase in developer maintainability?   I would argue the style the original programmer was using would result in the best runtime performance.




    Untrue.  sp_executesql reuses query plans for the same query strings as long as they use the same named parameters.  You can verify this using a Profiler trace on the SP:CacheHit event.

    As for maintainability, it's quite clear to me that maintaining one query, even with permutations, is far superior to maintaining 20.


     

    If you notice, the code in the original post that was suggested to be better did not use named parameters and would force a query plan regen on each call.  You're right that if you use sp_executesql correctly you can re-use plans, but compiled stored procedures can still offer better performance.

    I stand corrected; Microsoft has changed their guidance on this, check this link http://support.microsoft.com/default.aspx?scid=kb;en-us;243586   There is a situation where recompile is forced during execution, and if that happens all gains of the stored procedure are lost to the recompile.

     

     

  • (cs) in reply to t-bone
    t-bone:

    - because it's more prone to sql injection most of the time, and when you write the sproc you wouldn't imagine that it'll be used from a web page one day



    See what I mean about people thinking that dynamic SQL  = concatenating user input into strings?

    t-bone -- you can use parameters with dynamic SQL.  In fact, that's the *only* way you use dynamic SQL.


  • (cs) in reply to t-bone

    t-bone:
    o my god

    you just suggested to fix a wtf (the extreme branching) with another wtf (dynamic sql)...
    and you're even proud enough of it to post it here....
    and then it's posted...

    I give up, I'll be a janitor or something from now on

    It's just levels upon levels...

  • (cs) in reply to Jeff S


    Jeff S:
    t-bone:

    - because it's more prone to sql injection most of the time, and when you write the sproc you wouldn't imagine that it'll be used from a web page one day



    See what I mean about people thinking that dynamic SQL  = concatenating user input into strings?

    t-bone -- you can use parameters with dynamic SQL.  In fact, that's the only way you use dynamic SQL.




    oops -- I mean the only way you should use dynamic sql.   Big difference, sorry about that.

  • (cs) in reply to Jeff S
    Jeff S:

    Jeff S:
    t-bone:

    - because it's more prone to sql injection most of the time, and when you write the sproc you wouldn't imagine that it'll be used from a web page one day



    See what I mean about people thinking that dynamic SQL  = concatenating user input into strings?

    t-bone -- you can use parameters with dynamic SQL.  In fact, that's the *only* way you use dynamic SQL.




    oops -- I mean the only way you *should* use dynamic sql.   Big difference, sorry about that.


    Maybe, but my guess is that when you have to resort to something like that, your app design is likely to be faulty
    As I stated, I agree that dynamic SQL in the hands of someone who knows what he's doing can be OK, but suggesting it to someone who has proven to have no clue is a serious WTF.
  • Adam Machanic (unregistered) in reply to Jeff S
    Jeff S:


    See what I mean about people thinking that dynamic SQL  = concatenating user input into strings?

    t-bone -- you can use parameters with dynamic SQL.  In fact, that's the *only* way you use dynamic SQL.





    It's the only way I use dynamic SQL :)
  • (cs) in reply to Alex Papadimoulis
    Alex Papadimoulis:
    Jason:
    I'm not sure how dynamic code makes this more maintainable. ... Besides, if you hard code the SQL statement in a string, how is that dynamic?

    It looks like the real problem here is that the guy didn't know how to use CASE effectively.

    I'm no fan of Dynamic SQL myself, but consider the case where you will need to return results from different tables. I've done this on a few occasions when I didn't want to duplicate some of the more complicated logic in views:

    IF @SomeParam = 'someval' SET @TableName = 'Orders_Not_Authorized'
    ELSE SET @TableName = 'Orders_Not_Approved'



    Again, the difference is between someone who understands the power and pitfalls, and someone who doesn't even know what Dynamic SQL is. Dynamic SQL quality goes downhill very quickly if the person writing it isn't an experienced programmer who has a sense of proportion or readable style.

    I was handed about a hundred internal-use Perl scripts to maintain at a previous job. All of them used dynamic SQL generation to the level of dynamically determining 'and' conditions in a where clause, or even '+' or '-' in expressions -- all predicated directly on web input. Talk about a nightmare!
  • (cs) in reply to Fullstop
    Anonymous:
    I am not familiar with SQL server, but in Oracle we tend to do such things as this:

    (I'm using the SQL server syntax)
    SELECT * FROM [Archive_Letters]
    WHERE (@Archive_Desc is null or [Archive_Desc] = @Archive_Desc)

    This way, you just short circuit the where clause if the parameter is null.  If it is not null, it gets pulled into the query.

    It can make for one big ugly statement, though.  I am not sure which is worse, one big ugly statement, or 5 small ugly statements.  :-)




    Actually, you'd want the WHERE clause to look like this...

    WHERE (@Archive_Desc is null or (@Archive_Desc is not null AND [Archive_Desc] = @Archive_Desc))

    Otherwise (in MS SQL Server at least) the SELECT will evaluate all the conditions of the OR (I've tested this both ways in case anyone is curious).
  • (cs) in reply to WTF Batman
    WTF Batman:


    Again, the difference is between someone who understands the power and pitfalls, and someone who doesn't even know what Dynamic SQL is. Dynamic SQL quality goes downhill very quickly if the person writing it isn't an experienced programmer who has a sense of proportion or readable style.


    Exactly. If the original programmer has no clue, and the same can probably be expected from the maintainers of the code, then in a few months we'll see the dynamic SQL version posted here by George, the successor to Adam.
  • Adam Machanic (unregistered) in reply to t-bone
    t-bone:

    Maybe, but my guess is that when you have to resort to something like that, your app design is likely to be faulty


    Really?  Let's delve into this a bit.

    Assume you've created an application in which the user can search on any combination of 20 different fields.  How would you design your application to search the database on the criteria in such a way that it would not only be as maintainable as possible, but would also perform well even with hundreds of millions of rows and thousands of concurrent users? 
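
    One common way to do it (a sketch with made-up column names) is to append a predicate only for the fields the user actually supplied, then run it through sp_executesql so each distinct combination gets its own cached, index-friendly plan:

    DECLARE @sql nvarchar(4000)
    SET @sql = N'SELECT * FROM Customers WHERE 1 = 1'

    IF @LastName IS NOT NULL SET @sql = @sql + N' AND LastName = @LastName'
    IF @City     IS NOT NULL SET @sql = @sql + N' AND City = @City'
    -- ...one line per optional search field...

    EXEC sp_executesql @sql,
        N'@LastName varchar(50), @City varchar(50)',
        @LastName = @LastName, @City = @City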
  • (cs) in reply to Adam Machanic
    Anonymous:
    t-bone:

    Maybe, but my guess is that when you have to resort to something like that, your app design is likely to be faulty


    Really?  Let's delve into this a bit.

    Assume you've created an application in which the user can search on any combination of 20 different fields.  How would you design your application to search the database on the criteria in such a way that it would not only be as maintainable as possible, but would also perform well even with hundreds of millions of rows and thousands of concurrent users? 


    And you expect a dummy like the one you quoted to try and solve that with dynamic SQL? I hope he doesn't ask you to fix it.
  • (cs) in reply to t-bone
    t-bone:
    - because it's often the very reason for the performance issues I get to fix, as noted above it's not always an issue, but suggesting it to a newbie/crap programmer like that will surely result in crap

    - because it often indicates that the code calling the proc needs more reviewing than the query

    - because it leads to huge and imo unreadable scripts, not any more readable than the original, especially since most sproc debuggers suck, and you have to resort to print @sqlstring to debug it

    - because it's more prone to sql injection most of the time, and when you write the sproc you wouldn't imagine that it'll be used from a web page one day

    etc etc




    Point 1:  Just because an inexperienced coder will screw something up doesn't mean that using something is bad.  If we all thought that, then no one would use threading.

    Point 2: No it doesn't, it usually means that there's a complicated search that needs to be done with multiple parameters, of which any or none could be null, or the columns they are searching on could be nullable as well.  The alternative is table or index scans with COALESCE.

    Point 3:  It's a hell of a lot more readable than 2^x conditionals in a search procedure.

    Point 4:  Using sp_executesql correctly will make the dynamic query just as safe with regards to injection as any regular stored procedure.
  • (cs) in reply to t-bone
    t-bone:
    Anonymous:
    t-bone:

    Maybe, but my guess is that when you have to resort to something like that, your app design is likely to be faulty


    Really?  Let's delve into this a bit.

    Assume you've created an application in which the user can search on any combination of 20 different fields.  How would you design your application to search the database on the criteria in such a way that it would not only be as maintainable as possible, but would also perform well even with hundreds of millions of rows and thousands of concurrent users? 


    And you expect a dummy like the one you quoted to try and solve that with dynamic SQL? I hope he doesn't ask you to fix it.


    And what's wrong with CASE and IF when used inline?
  • Adam Machanic (unregistered) in reply to Sean
    Sean:

    Actually, you'd want the WHERE clause to look like this...

    WHERE (@Archive_Desc is null or (@Archive_Desc is not null AND [Archive_Desc] = @Archive_Desc))

    Otherwise (in MS SQL Server at least) the SELECT will evaluate all the conditions of the OR (I've tested this both ways in case anyone is curious).


    Are you SURE of that?


    ---
    use tempdb
    go

    create table xyz (searchit int not null primary key)
    go

    insert xyz (searchit)
    select number
    from master..spt_values
    where type = 'p'
    go

    set statistics profile on
    go

    declare @mysearch int
    set @mysearch = 100

    select *
    from xyz
    WHERE
    (@mysearch is null or (@mysearch is not null AND [searchit] = @mysearch))
    go

    set statistics profile off
    go

    drop table xyz
    go
    ---
  • (cs)

    Dynamic SQL has the advantage of being able to take into account the current structure of all tables and indexes, as well as making better use of the autogenerated statistics sets created by the query optimizer.  There can also be advantages in creating and executing dynamic SQL from your stored procs in some cases.  You can also use dynamic SQL to replace variables with constant expressions to allow the optimizer to have a clearer picture of the data being accessed by the query.  This can result in faster execution time because the optimizer is able to accurately assess the data distribution when creating the execution plan.
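
    As a sketch of the constant-expression point (with a made-up Orders table and an integer @Status parameter), embedding the value as a literal lets the optimizer build the plan from the actual statistics for that value instead of guessing:

    DECLARE @sql nvarchar(4000)
    -- Inlining is reasonable here only because the value is strictly numeric;
    -- string input should still go through sp_executesql parameters
    SET @sql = N'SELECT * FROM Orders WHERE Status = '
             + CAST(@Status AS nvarchar(10))
    EXEC (@sql)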

     

    However in this situation the above advantages don't clearly show themselves.

  • Jeremy (unregistered) in reply to Fullstop
    Anonymous:
    I am not familiar with SQL server, but in Oracle we tend to do such things as this:

    (I'm using the SQL server syntax)
    SELECT * FROM [Archive_Letters]
    WHERE (@Archive_Desc is null or [Archive_Desc] = @Archive_Desc)


    This way, you just short circuit the where clause if the parameter is null.  If it is not null, it gets pulled into the query.

    It can make for one big ugly statement, though.  I am not sure which is worse, one big ugly statement, or 5 small ugly statements.  :-)


     

    Arghhh... I wish people used ISNULL or COALESCE

    SELECT *
    FROM [Archive_Letters]
    WHERE ISNULL(@Archive_Desc, [Archive_Desc]) = [Archive_Desc]

     

    Nice and simple and easy... I'd have to say that the Dynamic SQL is a WTF in and of itself...

  • (cs) in reply to Jeremy
    Anonymous:
    Anonymous:
    I am not familiar with SQL server, but in Oracle we tend to do such things as this:

    (I'm using the SQL server syntax)
    SELECT * FROM [Archive_Letters]
    WHERE (@Archive_Desc is null or [Archive_Desc] = @Archive_Desc)


    This way, you just short circuit the where clause if the parameter is null.  If it is not null, it gets pulled into the query.

    It can make for one big ugly statement, though.  I am not sure which is worse, one big ugly statement, or 5 small ugly statements.  :-)


     

    Arghhh... I wish people used ISNULL or COALESCE

    SELECT *
    FROM [Archive_Letters]
    WHERE ISNULL(@Archive_Desc, [Archive_Desc]) = [Archive_Desc]

     

    Nice and simple and easy... I'd have to say that the Dynamic SQL is a WTF in and of itself...



    Yeah! Who needs to use those silly indexes, right Jeremy?
  • Adam Machanic (unregistered) in reply to Jeremy
    Anonymous:

    Arghhh... I wish people used ISNULL or COALESCE

    SELECT *
    FROM [Archive_Letters]
    WHERE ISNULL(@Archive_Desc, [Archive_Desc]) = [Archive_Desc]

     

    Nice and simple and easy... I'd have to say that the Dynamic SQL is a WTF in and of itself...




      Yes, agreed... very simple and easy!

       ... Too bad such practices will drive performance into the ground.  Table scans are not a good thing.


    ---
    use tempdb
    go

    create table xyz (searchit int not null primary key)
    go

    insert xyz (searchit)
    select number
    from master..spt_values
    where type = 'p'
    go

    set statistics profile on
    go

    declare @mysearch int
    set @mysearch = 100

    select *
    from xyz
    WHERE
    searchit = COALESCE(@mysearch, searchit)
    go

    set statistics profile off
    go

    drop table xyz
    go
    ---

  • Runtime Error (unregistered) in reply to Adam Machanic

    The obvious answer is that you need an XML-based dynamic query engine in front of this.  That will fix all of your problems.  The good news is that there is a company on this very website that could provide that technology.

  • (cs) in reply to Adam Machanic
    Anonymous:

    insert xyz (searchit)
    select number
    from master..spt_values
    where type = 'p'
    go


    Not a terribly good example for benchmarking... On my SQL Server I only have 255 rows in xyz. It's my understanding that for tables with a small number of rows, the optimizer figures it's quicker to do a table scan than to take any time to figure out which index to use, so it will just do a table scan by default. Try creating a table with a million rows and you might have a good test case.
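
    For instance (a sketch), cross-joining spt_values with itself produces a table big enough to make the difference visible:

    -- Build a larger test table; the row count varies by SQL Server version,
    -- but it will be far beyond the point where a scan is the default choice
    SELECT IDENTITY(int, 1, 1) AS searchit
    INTO xyz_big
    FROM master..spt_values a CROSS JOIN master..spt_values b

    ALTER TABLE xyz_big ADD CONSTRAINT PK_xyz_big PRIMARY KEY (searchit)

    -- Re-run the earlier WHERE-clause variants against xyz_big and compare plans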

     

  • (cs)
    Alex Papadimoulis:

    ... Adam asked the developer to use dynamic SQL. The developer was unfamiliar with this, so Adam explained that he simply needed to build a string that contained the SELECT statement, and then use sp_executesql to execute that statement. This way, Adam explained, all the common parts (the SELECT and FROM clauses) could be kept common and all the selection logic kept in the branching logic.


    I wonder what the other developer was thinking, when he implemented these changes: "That Adam guy is such a retard... this is even worse than before!"

  • database guy (unregistered)

    I work with people (plus legacy code) who are terrified of using a variable, let alone devising a data structure.

  • Chris Burgess (unregistered)

    EXEC sp_executesql @sql executes the SQL in its own scope, separate from the calling procedure. It can be really useful for custom dynamic reporting, but stick it into a sproc and you now have a nightmare for any DTS package that wants to use that data.
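
    For example (a sketch), anything created inside the dynamic batch is gone by the time control returns to the caller:

    EXEC sp_executesql N'CREATE TABLE #t (i int)'
    SELECT * FROM #t   -- fails: #t went out of scope when the inner batch ended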
