• Runtime Error (unregistered) in reply to VGR
    VGR:

    Then we must see things differently.  What's so horrendous?  Write up to four very short interfaces to go with your EJB class.  Add stuff to a simple XML file with a well-documented structure.

    Learning it was hard, but using it was easy.  An EJB 2.0 compliant entity bean with a dozen persisted properties took me fifteen minutes to write.  By hand, without any IDE (unless you count gvim and ant).

    Of course, I did smart things like using container-managed persistence, because I know the internals of a vendor's JDBC driver are more efficient than anything I can write.

    Now, if you tried to muddle your way through it instead of reading the EJB 2.0 specification, then I imagine it would have been a lot tougher.


    Oh, I don't know, how about no interfaces, and I just persist my POJO straight to the database?  Every other ORM framework picked that route, and every alternative to 2.0 entity beans is better.  It seems like they all started from the same design theme: "What would entity beans be like if they didn't suck and blow?"

    And no, having the IDE shit out the extra interfaces for you is not the answer.  That's the Microsoft way, and it's a one-stop WTF generator.

    EJBs are so bad that they make some people think that RoR is a good idea.

    This time I at least give the EJB design committee credit for realizing that other people had already solved this problem in a much more elegant manner, and that maybe they should be asked to work on the spec.  EJB 3.0 looks like it's a massive improvement.  I still don't see any reason to use it over something like Hibernate, but at least I wouldn't start gagging if I were mandated to use it.
  • (cs) in reply to [Si]dragon

    Anonymous:
    Alex Papadimoulis:
    Though they take a bit more time to develop upfront, using database stored procedures is definitely the way to go for most information systems. They provide looser coupling to the "data layer" by limiting the points of entry and offer stronger cohesion by breaking operations into reusable functionality.


    Considerably more time upfront and considerably more time in maintenance.  Every time a programmer has to tackle software using stored procedures, calamity ensues.  And for what benefit?

    Encapsulation.

    I have yet to see the case where SPs increase reliability or reduce bugs.

    Your ignorance does not an argument make.  SPs, by creating a well-defined protocol for interacting with data, are a boon in any system where more than one application has to work with the database.  Rather than requiring each application to do the Right Thing (usually using queries written from scratch), the stored procedures ensure the work only has to be done once.  They also provide an abstraction, decoupling client programs from the vagaries of the database schema; data may be partitioned or denormalized behind the scenes (to accommodate scalability issues, for example) but the SP isolates applications from these concerns.

    Sticking to these simple principles is a Good Idea.

    But using SPs does not undermine these principles.  They merely ensure that, say, creating various pieces of data (a customer, say, or an order, or a transaction, or whatever else) is done in a uniform and correct way.
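    That "uniform and correct way" doesn't depend on T-SQL. Here is a minimal sketch in Python with sqlite3 (the table, columns, and function name are made up for illustration), where one function plays the role a creation SP would:

    ```python
    import sqlite3

    def create_customer(conn, name, email):
        # The only place that knows the customer table's layout; callers
        # never write their own INSERT, just as they wouldn't with an SP.
        cur = conn.execute(
            "INSERT INTO customer (name, email) VALUES (?, ?)",
            (name, email),
        )
        conn.commit()
        return cur.lastrowid

    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, email TEXT)"
    )
    cid = create_customer(conn, "Ada", "ada@example.com")
    ```

    If the schema is later partitioned or denormalized, only this one function changes; every caller keeps working.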

    For example, consider the plight of your typically hapless developer.  There is no reason that he or she should have to also be an expert in whatever database you're using, let alone anything.

    Which is precisely why he should not be required to write queries, and instead should use a high-level front-end to the database--stored procedures.

    Period.  That means using services that do the mapping between business objects and persistence.  That work is best handled by someone who is a database expert.

    All the more reason to have a "database expert" generate SPs and leave object manipulation to the programmers.

  • (cs) in reply to Disgruntled DBA
    Disgruntled DBA:
    Anonymous:
    Alex Papadimoulis:

    Though they take a bit more time to develop upfront, using database stored procedures is definitely the way to go for most information systems. They provide looser coupling to the "data layer" by limiting the points of entry and offer stronger cohesion by breaking operations into reusable functionality.



    I have to ask: why do people think stored procedures are the way to go? I know most people on this site are .NET oriented, and I know MS for the longest time told people to use them (they have since gotten away from that, but hey, it is 2006), but what architectural reason could one use to justify this travesty?

    Don't get me wrong, SPs have their place, but to use them for all your data access and business logic is just plain nuts. If you are using VS.NET 2k5, look at DataSets or nHibernate. If you are using Java, take a gander at Hibernate (or EJB 3... though I still am leery of EJB) and, of course, there is Active Record for Ruby.


    A stored procedure's execution plan (or any other query's execution plan for that matter) is stored in the Procedure Cache in SQL Server's memory.  That's Library Cache for any Oracle folks.  When you call a stored procedure several times, the first call gets to generate the plan, and the next bunch of executions do not need to go through all the hoops and hassles of generating a new plan for each run (which is what dynamic or generated SQL statements will just about always do).  Because of this, using stored procedures (and by that I mean using them correctly) can result in a 33% boost in performance.  Not using them will result in your procedure cache getting cluttered with items like

    select * from customer where customer_id = '816A4E82-615C-4BD7-9293-D92ED8BAE0ED'


    Welcome to 1997.

    Ever since SQL 7, ad-hoc statements have run as fast as stored procedures.  If these were run on a SQL server:

    select * from customer where customer_id = '816A4E82-615C-4BD7-9293-D92ED8BAE0ED'
    select * from customer where customer_id = '547E1A86-812F-4635-81DA-56DDF997EAA0'
    select * from customer where customer_id = 'CAD71FCF-1BC7-43b6-A3EA-6044DD383CE4'
    select * from customer where customer_id = 'E6FA1DE6-BDA1-4d85-AC1A-A4E5E3975870'

    They would get auto-parameterized and run exactly as if a stored procedure had been created and executed like this:

    CREATE PROC xyz (@p1 uniqueidentifier) AS select * from customer where customer_id = @P1
    GO
    EXEC xyz '816A4E82-615C-4BD7-9293-D92ED8BAE0ED'
    EXEC xyz '547E1A86-812F-4635-81DA-56DDF997EAA0'
    EXEC xyz 'CAD71FCF-1BC7-43b6-A3EA-6044DD383CE4'
    EXEC xyz 'E6FA1DE6-BDA1-4d85-AC1A-A4E5E3975870'

    There certainly are reasons to use SPs.  However, performance is not one of them.
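    The client-side equivalent of that auto-parameterization is simply to use placeholders, so the statement text never varies from call to call. A sketch in Python, with sqlite3 standing in for SQL Server (the short ids are made-up stand-ins for the GUIDs above):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customer (customer_id TEXT PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO customer VALUES (?, ?)",
                     [("816A4E82", "Alice"), ("547E1A86", "Bob")])

    # The statement text stays byte-for-byte identical across calls, so
    # the engine can reuse one cached plan instead of compiling one per id.
    FIND = "SELECT name FROM customer WHERE customer_id = ?"
    names = [conn.execute(FIND, (cid,)).fetchone()[0]
             for cid in ("816A4E82", "547E1A86")]
    ```

    Concatenating the id into the string instead would produce a distinct statement per customer, which is exactly the procedure-cache clutter described above.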

    On a similar note, the best reasons to use SPs have already been stated; to define a data access layer, to promote reusability, and a billion others.  The interesting part is that there are other technologies that can do the same thing, namely object oriented programming.  The only real advantage I see to formal SPs is that they can be used in reporting tools which often cannot use a data access layer outside of the database (try to set the data source of a Crystal Report to the result of a C# method).  However, I would much rather see this code implemented as User Defined Functions or Views.  Both functions and views can be combined seamlessly into the rich set-oriented environment of an RDBMS.  SPs look like they don't belong.  Look at how ugly it is to have a procedure call another procedure and get back a result set.  Then look at how nicely it works with views and functions.

    Another downside of SPs is distribution.  New code expects new procedures to support it.  Having half an application deployed in the database and the other half on the web server is something to be concerned about.  Also, having two applications rely on the same set of procedures is tons of fun.  I know a lot of you will say that this is actually a good thing, but it's not.  The theory sometimes goes "Hey, I'll only have to update one application and the other will automatically get the benefits".  Well, that model was used for 20 years in Windows, and now they call it "DLL hell", and the world has moved on to private assemblies and strongly versioned assemblies.

    I have nothing against stored procedures, but I also don't see them as better than many other alternatives (I also don't see them as significantly worse).  But, I cringe whenever someone mentions the benefits of SPs and gets all of them wrong.  Saying SPs provide a data access layer is like trying to sell someone a $50,000 Chevy Cavalier and using "it will get you to work" as a sales pitch.  Sure, you're not lying, but you can do a lot better for your money.  To complete the analogy, there are far better ways to make a data access layer than stored procedures.

  • (cs) in reply to John Smallberries
    John Smallberries:
    Anonymous:
    Alex Papadimoulis:
    They provide a looser coupling to the "data layer" by limiting the points of entry

    I always thought they make for more coupling, because you have to recreate SPs when you want to use a different database. For instance, in MS SQL Server you write @var, in Oracle you write :var.

    Alex means looser coupling between the app logic and the database.

    Obviously the SP is rather tightly bound to the DBMS. See the last point in my previous post.


    Enough with the coupling!  There may be children watching.
  • (cs) in reply to jsmith
    jsmith:
    On a similar note, the best reasons to use SPs have already been stated; to define a data access layer, to promote reusability, and a billion others.  The interesting part is that there are other technologies that can do the same thing, namely object oriented programming.

    Not to be obtuse, but how? I'm a strong advocate of OO techniques, but I have yet to find an agreeable solution where data access code isn't sprinkled throughout a plethora of classes.

    And talk about ugly: SQL written inside another language can be downright obfuscated with all the string concatenation and quotes and escape sequences. Passing params to a proc seems 'natural'.

    jsmith:
    However, I would much rather see this code implemented as User Defined Functions or Views.

    Sounds great for reading data, but what about inserting/updating it? Updatable views? *shiver*

    jsmith:
    Having half an application deployed in the database and the other half on the webserver is something to be concerned about.

    Why? This is why distributed applications scale so well. And to be accurate, I think the amount of code on the DB is much less than 50%.

  • (cs) in reply to DrPizza
    DrPizza:

    <SNIP>a bunch of completely factual, yet not quite useful, information</SNIP>

    DrPizza,

    Yes, stored procedures provide encapsulation, a well-defined protocol for accessing the database, partitioning, denormalizing, abstraction, decoupling, and scalability.  However, so do:

    C#
    Java
    VB
    ... and 7 thousand other OO languages.

    And of all of them, Transact-SQL is the hardest to write, maintain, and debug (except maybe Malbolge).  I'm good at T-SQL, but I can still do anything in C# in half the time I can do it in T-SQL.  Decoupling example:  A working application needs a modification.  The security team decides that the current reversible encryption scheme in place for storing account passwords isn't good enough.  They now want a one-way hash with a salt.  Let's look at how our life would be in two situations:

    A.  You wrote all the DB access routines as SPs.  There are SPs for setting a password, and for validating a password.  Cool, now all you have to do is change the content of two SPs and run an update script on the current data (the current pw is reversible).

    B.  You wrote all the DB access routines in C#.  There are methods on the User class for setting a password and for validating a password.  Cool, now all you have to do is change the content of two methods and run an update method on the current data (the current pw is reversible).

    I can fix "B" in 5 minutes and give the security team their pick of encryption algorithms.  They pick SHA512, a well-trusted and robust hash algorithm and we have access to a tested, proven implementation in the .Net framework.  If it didn't, we could buy a third party component for a few hundred dollars.

    "A" would be interesting to implement.  We could do one of several things;  1.  Roll our own hash algorithm (very bad idea), 2.  Write an extended stored procedure in C (That'll be fun, and look at the nifty distribution issues we've just created),  3.  Move the logic back to the application (hey, where did our loose coupling go?).  If anyone has a better solution, please speak up.

  • (cs) in reply to John Smallberries
    John Smallberries:
    jsmith:
    On a similar note, the best reasons to use SPs have already been stated; to define a data access layer, to promote reusability, and a billion others.  The interesting part is that there are other technologies that can do the same thing, namely object oriented programming.

    Not to be obtuse, but how? I'm a strong advocate of OO techniques, but I have yet to find an agreeable solution where data access code isn't sprinkled throughout a plethora of classes.

    And talk about ugly: SQL written inside another language can be downright obfuscated with all the string concatenation and quotes and escape sequences. Passing params to a proc seems 'natural'.

    jsmith:
    However, I would much rather see this code implemented as User Defined Functions or Views.

    Sounds great for reading data, but what about inserting/updating it? Updatable views? *shiver*

    jsmith:
    Having half an application deployed in the database and the other half on the webserver is something to be concerned about.

    Why? This is why distributed applications scale so well. And to be accurate, I think the amount of code on the DB is much less than 50%.


    The ugly part is easy to fix.  I store my data access scripts in XML format using a tool I wrote.  The tool is a TreeView based organizer for data access code that generates C# or VB when I push a button.  To modify a script, I just modify the source and regenerate.  I can write arbitrary SQL and turn it into a method (just like a stored procedure).  I even have an implementation ComboBox so I can implement it as a stored procedure if I choose to do so.  The tool just spits out the CREATE PROC script.  Passing params to methods seems far more natural than passing them to procs.  You can't tell me that you figured out OUTPUT the first time you saw it.

    Updatable views can be handy every once in a while.  I agree that they are a lot of work to get right.  Most of my modification "units" are just good ole fashioned blocks of SQL.

    The last statement deserves a closer look.  Just because my T-SQL is generated in the middle tier doesn't mean it runs there.  You can think of middle tier generated SQL as just a big procedure name.  It goes to the server and runs there, providing exactly the same scalability benefits as stored procedures.  Even better, you can upgrade middle tier cluster nodes one at a time.  With SPs, you have to be really careful to create SPs so that any modifications work both with the previous middle tier and with the new middle tier.  Sometimes you end up with two sets of stored procedures -- yuck.

  • Chris (unregistered) in reply to jchart

    I have another VB solution for the padded "0":

    for i = 1 to 50
       param = sqlCmd.createParameter("int_" _
       & right("00" & i, 2) _
       & "_in", adInteger, adParamInput, 8, Null)
       sqlCmd.Parameters.Append param
    next

    Enjoy.
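    The Right("00" & i, 2) trick is language-neutral: pad on the left, then keep the rightmost two characters. The same idea sketched in Python for comparison:

    ```python
    def pad2(i):
        # right("00" & i, 2) in VB terms: prepend zeros, keep the last two.
        return ("00" + str(i))[-2:]

    # Parameter names like the ones in the thread: int_01_in .. int_50_in.
    names = ["int_" + pad2(i) + "_in" for i in range(1, 51)]
    ```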

  • (cs) in reply to jsmith
    jsmith:

    The ugly part is easy to fix.  I store my data access scripts in XML format using a tool I wrote.  The tool is a TreeView based organizer for data access code that generates C# or VB when I push a button.  To modify a script, I just modify the source and regenerate.  I can write arbitrary SQL and turn it into a method (just like a stored procedure).  I even have an implementation ComboBox so I can implement it as a stored procedure if I choose to do so.  The tool just spits out the CREATE PROC script.  Passing params to methods seems far more natural than passing them to procs.  You can't tell me that you figured out OUTPUT the first time you saw it.

    I have a similar tool...but it gens proc scripts. And yes, OUTPUT is the devil's spawn; I endeavor to use only resultsets.

    jsmith:

    The last statement deserves a closer look.  Just because my T-SQL is generated in the middle tier doesn't mean it runs there.  You can think of middle tier generated SQL as just a big procedure name.  It goes to the server and runs there, providing exactly the same scalability benefits as stored procedures.  Even better, you can upgrade middle tier cluster nodes one at a time.  With SPs, you have to be really careful to create SPs so that any modifications work both with the previous middle tier and with the new middle tier.  Sometimes you end up with two sets of stored procedures -- yuck.


    Your update point is very well taken. That is a serious pain in the ass with SPs.
  • (cs) in reply to Volmarias
    Volmarias:
    The sad thing is that he did

    param = sqlCmd.CreateParameter("int_01_in", adInteger, adParamInput, 8, Null) sqlCmd.Parameters.Append param

    param = sqlCmd.CreateParameter("int_02_in", adInteger, adParamInput, 8, Null) sqlCmd.Parameters.Append param

    ...

    When he should have done

    Dim val as String
    for(int i = 0; i < 50; i++) {
       val = i
       if(i < 10) val = " " + val
       param = sqlCmd.createParameter("int_" & val & "_in", adInteger, adParamInput, 8, Null)
       sqlCmd.Parameters.Append param
    }

    (mind, my VB is a bit rusty, so this is probably not quite well written. But then, VB is a WTF in and of itself)

    Ah, code reuse. What a joyous concept!

    You are assuming this was originally done in VB. Alex has a habit of changing odd in-house languages to something people are more likely to know (usually VB, for some reason).

    Since I'm right now using one of those in-house languages, I can fully sympathize. Subroutines? We don't need no stinking subroutines. Why would you want a stack? Aren't the 20 global variables we give you enough?

    I just spent today implementing a simple X &= 0xC0 as a long if-goto construct (not if-else, that would be too easy), then copy/pasted it 4 more times with only slight changes.

    And to think I told the interviewer that I thought scripting languages were fun.

  • Someone Who Still codes VB (unregistered) in reply to makomk
    makomk:
    Incidentally, correct VB would be:

    Dim val as String, i As Integer
    For i = 1 to 50
      val = i
      If i < 10 Then val = "0" + val
      param = sqlCmd.createParameter("int_" + val + "_in",adInteger, adParamInput, 8, Null)
      sqlCmd.Parameters.Append param
    Next


    My VB is also slightly rusty, but I'm pretty sure that's right...


    Except it is good form to say "Next i", which makes loop boundaries in nested loops a little clearer, although that doesn't work in ASP... :(

  • (cs)

    Re the "why use SPs" debate; I'm not going to get into the performance discussion - instead:

    I work largely with brown-field, organically growing systems, e.g. where in a year's time we need another sub-system bolted on, which re-uses the same e.g. customer model. Over this time technology also changes; language, architecture, etc. Using an SP abstracts this from the database; if I want to search for a customer, I call CustomerSearch (an SP with params) via whatever language I want. Given that this may be a complex SP involving dynamic/static SQL for different conditions, FTI if available/appropriate, etc., it would be a beast to maintain (coded directly) in various places (e.g. a .Net version, a VB6 version, etc.; client caller? server caller?), plus I would need to adjust each caller to implement a change. Just look at how many stabs it took to get the " " / "0" thing right above! It also allows more granular security; I can give a caller access to query the SP, but not the tables directly.

    Each way has pros / cons. For me, using SPs is a clear "win"...

  • useless user (unregistered) in reply to codeman
    codeman:
    loneprogrammer:
    Anonymous:

    I once had to write a proc to implement a query that had 38 user-specifiable fields. Naturally, I wanted to optimize the thing since only one or two would be used on most occasions. I wound up building a table of queryable fields and matching table and where-clause terms to be included for each search term, and then  checking each argument in turn: if not null, then include the relevant tables and where-clause in the query-string. Then normalize it to eliminate duplicates, and execute whatever it generated. Project rules required all db-logic to be in stored procs, so I had a proc generating the sql it needed and then dynamically executing it.


    Maybe I'm stupid but . . . there's got be a simpler way?  Your DBMS already has a query optimizer, do you need to do its job for it?

    There WAS an optimizer, but the requirements forced checking non-indexed columns for most of the fields, which forced the DBMS to do a table scan (I inherited the schema and was not allowed to change it). Show-plan detailed a series of nested table-scan loops that made you choke. The only way to prevent the multiply-nested table scans was to not reference the unused columns (this way, only the users who referenced the data paid the price). My boss recognized the problem, but the data was used by so many other groups that we just couldn't reorganize it.
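    The build-only-what-was-asked-for approach looks roughly like this in Python (table and column names are hypothetical, and in real code the column names would have to come from a whitelist, never from user input):

    ```python
    def build_search(filters):
        # Reference only the filters actually supplied, so unused
        # non-indexed columns never drag a table scan into the plan.
        sql = "SELECT * FROM measurements"
        clauses = []
        params = []
        for column, value in filters.items():
            clauses.append(column + " = ?")
            params.append(value)
        if clauses:
            sql += " WHERE " + " AND ".join(clauses)
        return sql, params

    sql, params = build_search({"region": "north", "status": "active"})
    ```

    The values travel as parameters rather than being concatenated in, so only the shape of the query varies, not its literals.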

    BTW: if you used ALL of the parameters (nothing was null), the Oracle server (on what was then a fairly high-powered unix box) returned rows at the mind-numbing speed of approximately 3 rows per SECOND.

    To be fair to Oracle, the query was totally unreasonable, and it did grind through the work without any errors
     


    I don't understand what is so hard to grasp about using dynamic SQL statements. Sure, they are not compiled, but after all the hoops and crazy joins it sounds like you had to jump through, a direct statement just sounds easier.

    Or does concatenating a string seem too non-l33t for you?
  • (cs) in reply to [Si]dragon

    Anonymous:
    Alex Papadimoulis:
    Though they take a bit more time to develop upfront, using database stored procedures is definitely the way to go for most information systems. They provide looser coupling to the "data layer" by limiting the points of entry and offer stronger cohesion by breaking operations into reusable functionality.


    Considerably more time upfront and considerably more time in maintenance.  Every time a programmer has to tackle software using stored procedures, calamity ensues.  And for what benefit?  I have yet to see the case where SPs increase reliability or reduce bugs.  Maybe improve performance.  Unfortunately, developer time is a lot more expensive than the value of the time saved.

    Violating patterns by spreading business logic across layers and adding more variables and complexity into the mix.  SPs are not a good idea.

    The database stores data.

    The business logic manipulates it.

    The view displays it.

    Sticking to these simple principles is a Good Idea.  For example, consider the plight of your typically hapless developer.  There is no reason that he or she should have to also be an expert in whatever database you're using, let alone anything.  Period.  That means using services that do the mapping between business objects and persistence.  That work is best handled by someone who is a database expert.

    I can come up with more reasons but there are plenty of materials on the subject which support this.  To get you started, Google on "Gang of Four", "Spring", "Hibernate", "Pragmatic Programmer", and so on.  Enjoy.

    Other people responded to this parochialism better than I probably will, but I feel compelled to make a few points:

    1. At some point in their careers, usually past the 10 year mark, programmers who survive graduate to enterprise-class applications, which generally involve large volumes of data. By "large volumes of data" I don't just mean large by the standards of UIs, but millions and billions of rows. You can't handle those outside the DBMS in the amount of time you will typically be given, and even if you had the time, where money is concerned nobody is going to care if your solution is in a patterns book or not. Whatever your religious views, you are going to end up using stored procedures. You'll end up becoming more tolerant, and you might even realize that there are other, equally valid, paradigms than OO.

    2. If you want to be taken seriously, learn the difference between "database" and "DBMS".

    3. The incompatibility of Java culture and DBMSes is mostly myth, IMHO caused by fear of the unknown and really misguided tools like Hibernate. Here is an example of a Java tool that bridges the gap: http://sourceforge.net/projects/amber-db.

     

  • MBV (unregistered) in reply to Good Architecture Man
    Anonymous:
    Alex Papadimoulis:

    Though they take a bit more time to develop upfront, using database stored procedures is definitely the way to go for most information systems. They provide looser coupling to the "data layer" by limiting the points of entry and offer stronger cohesion by breaking operations into reusable functionality.



    I have to ask: why do people think stored procedures are the way to go? I know most people on this site are .NET oriented, and I know MS for the longest time told people to use them (they have since gotten away from that, but hey, it is 2006), but what architectural reason could one use to justify this travesty?

    Don't get me wrong, SPs have their place, but to use them for all your data access and business logic is just plain nuts. If you are using VS.NET 2k5, look at DataSets or nHibernate. If you are using Java, take a gander at Hibernate (or EJB 3... though I still am leery of EJB) and, of course, there is Active Record for Ruby.

    I know one good example: imagine a large set of measured values from field sensors, where you want the temperature at 12:00 of each day for an entire year, while measurements are taken every 5 minutes. How do you do that efficiently over a network without using stored procedures?
    That's the only good point of a system which is otherwise entirely WTF. No specs, no testing, no version management, no design, but it was the best available for that purpose.
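    Any mechanism that runs the filter inside the engine gets the same effect, SP or not. A sketch with sqlite3 (the table name and timestamps are made up), where only the noon rows ever cross the wire:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE readings (ts TEXT, temp REAL)")
    conn.executemany("INSERT INTO readings VALUES (?, ?)", [
        ("2006-01-01 11:55:00", 3.1),
        ("2006-01-01 12:00:00", 3.4),
        ("2006-01-02 12:00:00", 2.8),
        ("2006-01-02 12:05:00", 2.9),
    ])

    # The engine discards the other 287 readings per day; the client
    # receives one row per day instead of a year of 5-minute samples.
    noon = conn.execute(
        "SELECT ts, temp FROM readings"
        " WHERE strftime('%H:%M', ts) = '12:00' ORDER BY ts"
    ).fetchall()
    ```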
  • Ed C (unregistered)

    That picture of Joel on the home page is really scary. Could someone please remove it!!!

  • (cs) in reply to Good Architecture Man
    Anonymous:

    I have to ask: why do people think stored procedures are the way to go? I know most people on this site are .NET oriented, and I know MS for the longest time told people to use them (they have since gotten away from that, but hey, it is 2006), but what architectural reason could one use to justify this travesty?

    Don't get me wrong, SPs have their place, but to use them for all your data access and business logic is just plain nuts. If you are using VS.NET 2k5, look at DataSets or nHibernate. If you are using Java, take a gander at Hibernate (or EJB 3... though I still am leery of EJB) and, of course, there is Active Record for Ruby.


    Using stored procedures for business logic is generally nuts. That belongs in the middle tier.

    Stored procedures are there to control data logic. Validate your data, maintain relationships, control access. The possible added performance is a bonus.
  • (cs) in reply to VGR
    VGR:
    Of course, I did smart things like using container-managed persistence, because I know the internals of a vendor's JDBC driver are more efficient than anything I can write.


    Are you serious?

    (my vendor's JDBC driver spends 25% of its runtime instantiating NumberFormatException objects in an inner loop)
  • RichNFamous (unregistered) in reply to rob_squared
    rob_squared:
    This must be the subtle way a DBA tells his boss that he's doing everything.

    Guess it didn't work.


    This is how Enterprise works...
  • John Doe (unregistered) in reply to RyuO
    RyuO:
    3. The incompatibility of Java culture and DBMSes is mostly myth, IMHO caused by fear of the unknown and really misguided tools like Hibernate. Here is an example of a Java tool that bridges the gap: http://sourceforge.net/projects/amber-db.


    Yes, that one was great. Thank you. The all-German comments were a nice touch. "Norbert" seems to like to see his own name; he mentions it 200 times in 1100 lines of code.

    He doesn't trust regular expressions that use complicated features like '|':

    <font size="1">   public static String getUrlLessPassword(String url) {
          if (url != null) {
             String pwd1 = "pwd";
             String pwd2 = "password";
             final Pattern p1 = Pattern.compile(pwd1 + "=.*?;", Pattern.CASE_INSENSITIVE);
             final Pattern p2 = Pattern.compile(pwd2 + "=.*?;", Pattern.CASE_INSENSITIVE);
             final Pattern p3 = Pattern.compile(pwd1 + "=.*?$", Pattern.CASE_INSENSITIVE);
             final Pattern p4 = Pattern.compile(pwd2 + "=.*?$", Pattern.CASE_INSENSITIVE);
             url = p1.matcher(url).replaceAll(pwd1 + "=*****;");
             url = p2.matcher(url).replaceAll(pwd1 + "=*****;");
             url = p3.matcher(url).replaceAll(pwd1 + "=*****");
             url = p4.matcher(url).replaceAll(pwd1 + "=*****");
          }
          return url;
       }</font>
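    With alternation, the four patterns collapse into one, which also fixes the quoted code's bug of rewriting "password=" matches with the "pwd=" prefix. A sketch in Python rather than Java:

    ```python
    import re

    # One pattern with '|' covers pwd/password and both the ';' and
    # end-of-string cases; \1 echoes back whichever key actually matched.
    MASK = re.compile(r"(pwd|password)=[^;]*", re.IGNORECASE)

    def url_less_password(url):
        if url is None:
            return None
        return MASK.sub(r"\1=*****", url)
    ```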

    But he does like Vector-based programming:

    <font size="2">    public Vector<Vector> executeCreateVector( String queryString )

    </font>I especially like the half-generic type - it's a 1.5 vector of 1.4 vectors!
  • (cs) in reply to Good Architecture Man

    As an aside, I used to work at that company too, and some of their practices were frankly shocking.

    SPs were generally encouraged during the late 90s/2000s dot-com boom, when there were new languages/versions coming out all the time. SPs at least provided some re-usability and ensured developers didn't have to re-write all data access code twice a year.

  • John Doe (unregistered) in reply to John Doe

    That was Vector<vector> before the forum software ate my lt-gt-s. Or Vector&lt;Vector&gt;, or Vector<vector>, depending on the exact amount of double-unescaping that will happen to this post. It was unescaped once when the captcha failed. Argh.

  • (cs) in reply to RyuO
    RyuO:

    Anonymous:
    Alex Papadimoulis:
    Though they take a bit more time to develop upfront, using database stored procedures is definitely the way to go for most information systems. They provide looser coupling to the "data layer" by limiting the points of entry and offer stronger cohesion by breaking operations into reusable functionality.


    Considerably more time upfront and considerably more time in maintenance.  Every time a programmer has to tackle software using stored procedures, calamity ensues.  And for what benefit?  I have yet to see the case where SPs increase reliability or reduce bugs.  Maybe improve performance.  Unfortunately, developer time is a lot more expensive than the value of the time saved.

    Violating patterns by spreading business logic across layers and adding more variables and complexity into the mix.  SPs are not a good idea.

    The database stores data.

    The business logic manipulates it.

    The view displays it.

    Sticking to these simple principles is a Good Idea.  For example, consider the plight of your typical hapless developer.  There is no reason that he or she should have to also be an expert in whatever database you're using, let alone anything else.  Period.  That means using services that do the mapping between business objects and persistence.  That work is best handled by someone who is a database expert.

    I can come up with more reasons but there are plenty of materials on the subject which support this.  To get you started, Google on "Gang of Four", "Spring", "Hibernate", "Pragmatic Programmer", and so on.  Enjoy.

    Other people responded to this parochialism better than I probably will, but I feel compelled to make a few points:

    1. At some point in their careers, usually past the 10 year mark, programmers who survive graduate to enterprise-class applications, which generally involve large volumes of data. By "large volumes of data" I don't just mean large by the standards of UIs, but millions and billions of rows. You can't handle those outside the DBMS in the amount of time you will typically be given, and even if you had the time, where money is concerned nobody is going to care if your solution is in a patterns book or not. Whatever your religious views, you are going to end up using stored procedures. You'll end up becoming more tolerant, and you might even realize that there are other, equally valid, paradigms than OO.

    2. If you want to be taken seriously, learn the difference between "database" and "DBMS".

    3. The incompatibility of Java culture and DBMSes is mostly myth, IMHO caused by fear of the unknown and really misguided tools like Hibernate. Here is an example of a Java tool that bridges the gap: http://sourceforge.net/projects/amber-db.

     


    This is what really gets me..... why do you think that if the data access layer is outside the database, all the logic executes outside the database?  What's wrong with this:

    <FONT face="Courier New">SqlCommand cmd = new SqlCommand("UPDATE Products SET Price = Price * 1.1", cn);
    cmd.ExecuteNonQuery();</FONT>

    You don't have to be an idiot and write:

    <FONT face="Courier New">SqlCommand cmd = new SqlCommand("SELECT ProductID, Price FROM Products", cn);
    SqlDataReader dr = cmd.ExecuteReader();

    while(dr.Read())
    {
      SqlCommand cmdInner = new SqlCommand
                    ("UPDATE Products SET Price = " + 
                     ((decimal)dr["Price"] * 1.1m).ToString() + 
                     " WHERE ProductID = " +
                     dr["ProductID"].ToString(), cn);
      cmdInner.ExecuteNonQuery();
    }</FONT>

    Whatever you put in the SqlCommand (top example of course) will execute as efficiently as a stored procedure.

  • (cs) in reply to raluth

    raluth:
    Re the "why use SPs" debate; I'm not going to get into the performance discussion - instead: I work largely with brown-field, organically growing systems, e.g. where in a year's time we need another sub-system bolting on, which re-uses the same e.g. customer model. Over this time technology also changes; language, architecture, etc. Using an SP abstracts this from the database; if I want to search for a customer, I call CustomerSearch (SP with params) - via whatever language I want. Given that this may be a complex SP involving dynamic/static SQL for different conditions, FTI if available / appropriate, etc, it would be a beast to maintain (coded directly) in various places (e.g. a .Net version, a VB6 version, etc; client caller? server caller?) - plus I would need to adjust each caller to implement a change. Just look at how many stabs it took to get the " " / "0" thing right above! It also allows more granular security; I can give a caller access to query the SP, but not the tables directly. Each way has pros / cons. For me, using SPs is a clear "win"...

    That problem has already been solved.  Use Web services, DCOM, or CORBA as an interface between your business logic and the data access code.  As for security, you can give the caller access to a method, but not the tables directly.  The caller doesn't even know the name of the database server!!!

    BTW, dynamic SQL doesn't work in stored procedures unless the caller has the permissions to run the dynamic SQL themselves.  That's my major problem with using SPs for security.  So, two of your advantages above cannot be used simultaneously: either you deny access to the table or you use dynamic SQL.

  • Doug (unregistered) in reply to jchart
    Anonymous:

    makomk:
    Incidentally, correct VB would be:

    Dim val as String, i As Integer
    For i = 1 to 50
      val = i
      If i < 10 Then val = "0" + val
      param = sqlCmd.createParameter("int_" + val + "_in",adInteger, adParamInput, 8, Null)
      sqlCmd.Parameters.Append param
    Next


    My VB is also slightly rusty, but I'm pretty sure that's right...

    Actually, I think this would be better:

    <font face="Courier New">Dim i as Integer</font>

    <font face="Courier New">For 1=1 to 50</font>

    <font face="Courier New">  param=sqlCmd.createParameter("int_"&format(i,"00")&"_in",adInteger,adParamInput, 8 Null)</font>

    <font face="Courier New">  sqlCmd.Parameters.Append Param</font>

    <font face="Courier New">Next</font>



    I don't know any VB, but that seems pretty cool being able to assign 1 through 50 to the variable "1".  Though you forgot to use it in your loop.


    ;-)
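All the stabs above are fighting the same zero-padding problem; for comparison, in a language with printf-style formatting (Java here, parameter names taken from the snippets), the padded names are a one-liner:

```java
import java.util.ArrayList;
import java.util.List;

public class ParamNames {
    // generate int_01_in .. int_50_in, the names the VB snippets build by hand
    static List<String> paramNames(int count) {
        List<String> names = new ArrayList<String>();
        for (int i = 1; i <= count; i++) {
            names.add(String.format("int_%02d_in", i)); // %02d zero-pads 1 -> "01"
        }
        return names;
    }

    public static void main(String[] args) {
        System.out.println(paramNames(50).get(0));  // int_01_in
        System.out.println(paramNames(50).get(49)); // int_50_in
    }
}
```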
  • (cs) in reply to jsmith
    jsmith:

    This is what really gets me..... why do you think that if the data access layer is outside the database, all the logic executes outside the database?  What's wrong with this:

    <font face="Courier New">SqlCommand cmd = new SqlCommand("UPDATE Products SET Price = Price * 1.1", cn);
    cmd.ExecuteNonQuery();</font>

    You don't have to be an idiot and write:

    <font face="Courier New">SqlCommand cmd = new SqlCommand("SELECT ProductID, Price FROM Products", cn);
    SqlDataReader dr = cmd.ExecuteReader();

    while(dr.Read())
    {
      SqlCommand cmdInner = new SqlCommand
                    ("UPDATE Products SET Price = " + 
                     ((decimal)dr["Price"] * 1.1m).ToString() + 
                     " WHERE ProductID = " +
                     dr["ProductID"].ToString(), cn);
      cmdInner.ExecuteNonQuery();
    }</font>

    Whatever you put in the SqlCommand (top example of course) will execute as efficiently as a stored procedure.



    jsmith -- what happens when that first update statement is really 60 lines long and accepts 5 parameters?  How do you debug it properly?  Where do you store the T-SQL code -- is it all inline, concatenated together?  Do you put it in a separate config file and load it in?

    When that code is in a stored procedure, you gain the advantage of:

    1 - any application, no matter what language it is written in, has a standard, single place to call this T-SQL code.
    2 - That T-SQL code can be fully tested, tweaked, modified, optimized, etc. *without* looking through source code or config files, and/or recompiling the app. 
    3 - Your application code is much shorter and more readable.
    4 - Your T-SQL is verified and compiled.  Now, I do agree that the performance benefits of stored procs are overrated.  So forget about that.  But now your code is fully syntax-checked, objects are verified, column names are checked, you can examine the execution plan, etc.  Leaving the code to be executed from an ad-hoc string doesn't allow for any of this!  Don't you want your code to be verified before deploying your app?  When you write code in other languages that are compilable, do you compile your application before sending it out to production?  Why not do the same for your database code?

    Another issue is the fact that in-line SQL tends to lead most developers down the path of concatenating parameters to the SQL statement and directly executing it instead of using parameters.

    So, I am not 100% sold on using stored procs all the time, but they have huge benefits and to ignore those benefits when working on a complex application would be a mistake.

    You also wrote earlier that for you, T-SQL is much harder to write than other languages.  This implies that your database code is probably not as cleanly written as it could be, since you admit to being a little weak when it comes to SQL.  Because if you were really good with it, you would know that pretty much *any* manipulation of data is much, much shorter, simpler, quicker and easier to write in SQL than in most any other language I've worked with.  And if you are having trouble writing SQL because you feel it is more difficult, isn't it easier to write it in an environment designed for it, which compiles the code, rather than embedding literal SQL statements in your application?

    That is like saying "C++ is more difficult to write than VB ... therefore, instead of using Visual Studio to compile my C++ code, I will embed it in string literals in my VB app and have it compiled 'on the fly' at execution time.  This will be easier than fully developing individual C++ libraries, compiling them, and then calling them from the VB application using a COM interface."  That doesn't make much sense, does it?

  • (cs) in reply to jsmith
    SQL Server
    Welcome to 1997.

    Ever since SQL 7, ad-hoc statements have run as fast as stored procedures.  If these were run on a SQL server:

    select * from customer where customer_id = '816A4E82-615C-4BD7-9293-D92ED8BAE0ED'
    select * from customer where customer_id = '547E1A86-812F-4635-81DA-56DDF997EAA0'
    select * from customer where customer_id = 'CAD71FCF-1BC7-43b6-A3EA-6044DD383CE4'
    select * from customer where customer_id = 'E6FA1DE6-BDA1-4d85-AC1A-A4E5E3975870'

    They would get auto-parameterized and run exactly as if a stored procedure has been created and executed like this:

    CREATE PROC xyz (@p1 uniqueidentifier) AS select * from customer where customer_id = @P1
    GO
    EXEC xyz '816A4E82-615C-4BD7-9293-D92ED8BAE0ED'
    EXEC xyz '547E1A86-812F-4635-81DA-56DDF997EAA0'
    EXEC xyz 'CAD71FCF-1BC7-43b6-A3EA-6044DD383CE4'
    EXEC xyz 'E6FA1DE6-BDA1-4d85-AC1A-A4E5E3975870'

    There certainly are reasons to use SPs.  However, performance is not one of them.....

    Almost, but not quite.

    I invite you to run the following query:

    select cacheobjtype, sql from master..syscacheobjects

    Anything auto-parameterized should not have values hard-coded into it, and will generally start with the parameters (@P1, usually).  You should see a shocking number of hard-coded values, if I am right.  You will see a bunch of calls to stored procedures with values attached, but these call procedure plans that are (drumroll, please) bare of any parameters at all.

    SQL server will try to auto-parameterize things, but it is very conservative (see SQL Server Books Online for confirmation of that).  This conservativeness will often prevent SQL Server from auto-parameterizing even simple queries.  And just how many queries in the average application are considered "simple", anyway?  Try this at home, if you are unconvinced.  Mileage may vary and all that other bs.
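Conceptually, auto-parameterization is just the server replacing literals with placeholders before looking up a cached plan, so queries that differ only in their literals normalize to the same text. A toy sketch of that normalization step (nothing like the real optimizer's rules, which, as noted above, are far more conservative):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class AutoParam {
    private static final Pattern LITERAL = Pattern.compile("'[^']*'");

    // toy normalization: every quoted literal becomes @P1, @P2, ...
    static String normalize(String sql) {
        StringBuffer out = new StringBuffer();
        Matcher m = LITERAL.matcher(sql);
        int n = 0;
        while (m.find()) {
            m.appendReplacement(out, "@P" + (++n));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        String a = normalize("select * from customer where customer_id = 'AAA'");
        String b = normalize("select * from customer where customer_id = 'BBB'");
        System.out.println(a.equals(b)); // true: both would hit the same cached plan
    }
}
```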

  • ca (unregistered) in reply to Good Architecture Man
    Anonymous:

    I have worked in this space for about 15 years. I have worked with the government, on projects for the olympics, for companies such as Enron, SBC and IBM, and I have started two software companies, one of which I sold. Never have I seen a project improve because of SPs. Generally speaking, SPs were almost always directly related to projects slipping and increasing in budget.


    Maybe you should consider not making it 16 years.  I tire of folks like you being DBA's and forbidding the rest of us from writing SP's.  Some people actually write ad-hoc queries for a living and aren't in the business of leveraging our enterprise synergies to architect enterprise solutions for muddle tiers, and actually write ad-hoc queries.  You know, we're the folks that are getting work done when marcom and so forth wants a report or analysis, while all you project people throw middle tiers at us.  We get this work done because we don't have to smurf through the smurfing ... er, enterprising enterprise tier. 



  • (cs) in reply to Jeff S
    Jeff S:
    <SNIP>Jeff S's post, quoted in full above</SNIP>


    I *never* said I was weak in T-SQL.  I have been an MCP in SQL since 1996.  It's T-SQL that's weak.  How can you say that a language with only one looping structure and one statement-level conditional structure is easy to use?  The basic cursor template is:

    <FONT face="Courier New">DECLARE authors_cursor CURSOR FOR
    SELECT au_lname FROM authors
    WHERE au_lname LIKE 'B%'
    ORDER BY au_lname

    OPEN authors_cursor

    -- Perform the first fetch.
    FETCH NEXT FROM authors_cursor

    -- Check @@FETCH_STATUS to see if there are any more rows to fetch.
    WHILE @@FETCH_STATUS = 0
    BEGIN
       -- This is executed as long as the previous fetch succeeds.
       FETCH NEXT FROM authors_cursor
    END

    CLOSE authors_cursor
    DEALLOCATE authors_cursor</FONT>

    Notice the two fetches.  Because T-SQL lacks a loop with the condition check at the end, the fetch has to be written twice.  If a new variable is added, both fetches need to be updated.
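The complaint is structural: T-SQL has no post-condition loop. In a language that has one (Java here, with a toy fetchNext() standing in for FETCH NEXT; just an illustration, not a cursor API), the fetch appears exactly once:

```java
import java.util.Arrays;
import java.util.Iterator;

public class SingleFetch {
    // toy stand-in for FETCH NEXT: next row, or null when the cursor is exhausted
    static String fetchNext(Iterator<String> cursor) {
        return cursor.hasNext() ? cursor.next() : null;
    }

    // the fetch is written once because the loop condition is tested at the
    // end -- the construct T-SQL's WHILE lacks, forcing the priming fetch
    static int drain(Iterator<String> cursor) {
        int processed = 0;
        String row;
        do {
            row = fetchNext(cursor);
            if (row != null) processed++; // "process the row" here
        } while (row != null);
        return processed;
    }

    public static void main(String[] args) {
        System.out.println(drain(Arrays.asList("Bennet", "Blotchet-Halls").iterator())); // 2
    }
}
```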

    Also, SPs are NOT fully checked when they are created.  Table names and column names are not validated until the statements are run.  If an SP has several branches, testing is a pain.  You can create it WITH SCHEMABINDING to cause it to validate objects immediately, but then you have to drop the procedure to alter the underlying objects.

    Inlining code isn't a big deal.  All you need is a good tool.  It takes a few hours to write a tool that will inline any SQL and you'll never even see the string concatenation.  Put all the inlined SQL calls in a class module as static methods and it looks just like you are calling stored procedures, only easier.

    My development process is:
    1.  Write and test block of SQL in Query Analyzer
    2.  Wrap it in a method with tool
    3.  Write .Net code that calls block of SQL

    It looks identical to how most people write SPs.  It's so close that my SQL wrapper tool can actually turn it into a stored procedure without altering any of my calling code.

    As for concatenating, my wrapper tool ONLY supports parameters.  That's the only way to get information into the SQL block.  Concatenating SQL leads to SQL injection vulnerabilities.
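The wrapper idea can be sketched in a few lines (Java here, with hypothetical names; a real version would hand the pair to a PreparedStatement rather than return it). The point is the shape: one static method per SQL block, placeholders only, so call sites read like stored-procedure calls:

```java
public class SqlBlock {
    final String sql;
    final Object[] params;

    private SqlBlock(String sql, Object... params) {
        this.sql = sql;
        this.params = params;
    }

    // one static method per SQL block -- the SQL text lives in exactly one place
    static SqlBlock raisePrices(double factor, int categoryId) {
        // '?' placeholders only; there is no code path that concatenates values,
        // which is what closes off SQL injection
        return new SqlBlock(
            "UPDATE Products SET Price = Price * ? WHERE CategoryID = ?",
            factor, categoryId);
    }

    public static void main(String[] args) {
        SqlBlock b = SqlBlock.raisePrices(1.1, 7);
        System.out.println(b.sql);
        System.out.println(b.params.length); // 2
    }
}
```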

    So, back to the original points:
    1 - Web services also provide a single point of access for any language.
    2 - My T-SQL is debugged in Query Analyzer.  The SQL source is stored separately and is very simple to re-embed in the code.
    3 - The embedded SQL is in its own code modules.  The actual business code isn't cluttered with anything.  IT LOOKS JUST LIKE A PROCEDURE CALL.
    4 - Your SP code is not verified either (see above).  Unfortunately, with lazy object resolution, that is always an issue.  BTW, the first time one of my methods is run, the server has to parse the whole block of code.  If any part of it doesn't parse, the whole block is rejected.  So, as long as I run it at least once, I know that it is syntactically correct.  At the end of the day, I get as much validation as a stored procedure does.  The only way I can get burned is if I deploy a SQL block that I've never run even once.  In that case, I'd deserve whatever I get.

    Stored procedures are just another form of modular programming.  They offer nothing magic or special.  When analyzed, nearly every "advantage" of a stored procedure is really just a choice: either get that advantage by modularizing within the database, or get the same advantage by modularizing outside the database.  However, T-SQL is a very weak programming language.  T-SQL has only about 10 control-of-flow keywords; most modern programming languages have 50 or more.

    I love SQL for set oriented operations.  I'm no stranger to 300 line update statements.  But, it really is a BAD procedural language.  That's why they introduced CLR code in SQL 2005.

    The problem we run into here is that a lot of good programmers use stored procedures effectively.  Then they teach junior programmers that somehow their good programming practices were a direct result of using stored procedures and tell the junior that "they must use SPs".  The junior programmers then go on to make horrible applications.  Senior programmers should really be telling junior programmers how to make maintainable applications and always be on the lookout for better ways to make their applications maintainable.  Stored Procedures were the state of the art in the early 90s.  The world has moved on since then.  They still work and are still the best choice for some things, but there are better ways to do what most people do with stored procedures.  I'm not saying SPs have gotten worse, just the alternatives have gotten much better.

  • (cs) in reply to Disgruntled DBA
    Disgruntled DBA:
    <SNIP>Disgruntled DBA's post, quoted in full above</SNIP>


    I ran the four queries on my server, then ran "select cacheobjtype, sql from master..syscacheobjects" and I found this in the procedure cache:

    (@1 varchar(100))SELECT * FROM [customer] WHERE [customer_id]=@1

    Looks like the auto-parameterization system works well.

  • (cs) in reply to jsmith

    Programmer productivity should be the #1 optimization.

    This has to be the biggest WTF I have seen both here and abroad.  If this is true, then we should all go back to DOS, because GUIs are too hard for the programmers at Microsoft to make.  And just what happens to be the priority of the User?
  • IDemandBeer (unregistered) in reply to Anonymous
    Anonymous:
    Anonymous:
    I think you meant "loser coupling"


    Isn't that where baby nerds come from?


    *snarfs* *wipes coffee from monitor*

    HAHAHAHA




  • (cs) in reply to jsmith
    jsmith:
    <SNIP>the jsmith / Jeff S exchange, quoted in full above</SNIP>

     

    No matter what kind of tool you use, if you allow the database to accept raw SQL, then any developer working on another project that touches that database can write inline SQL to modify data without going through your tool.  Then you have multiple applications dependent on the data structures, field names (potentially field order as well), and relational constraints within the database.  Changing any of these is then extremely likely to break someone's app.  If, on the other hand, you require all access to go through a well-designed set of SPs, all development must go through them (no logins have select, update, or delete permissions on raw tables).  Then you can change, extend, or modify anything in the database without breaking any application or middle-tier code, because you can modify the internals of the stored proc and keep its public interface immutable.
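    The "immutable public interface over changing internals" idea above can be sketched without SPs at all.  A minimal illustration using SQLite and a view as the stable contract (all table and column names here are invented, and SQLite stands in for whatever server you actually run):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")

    # Internal schema, version 1.
    conn.execute("CREATE TABLE emp_v1 (emp_name TEXT, emp_salary REAL)")
    conn.execute("INSERT INTO emp_v1 VALUES ('Alice', 50000)")

    # The stable public interface: clients only ever touch this view.
    conn.execute("CREATE VIEW employees AS "
                 "SELECT emp_name AS name, emp_salary AS salary FROM emp_v1")
    print(conn.execute("SELECT name, salary FROM employees").fetchall())

    # Internals change (the table is renamed); rebuild the view behind the
    # same interface, and callers of `employees` never notice.
    conn.execute("DROP VIEW employees")
    conn.execute("ALTER TABLE emp_v1 RENAME TO emp_v2")
    conn.execute("CREATE VIEW employees AS "
                 "SELECT emp_name AS name, emp_salary AS salary FROM emp_v2")
    print(conn.execute("SELECT name, salary FROM employees").fetchall())
    ```

    Views only cover the read side, of course; procs (or INSTEAD OF triggers) do the same for writes.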

  • (cs) in reply to jsmith
    jsmith:
    DrPizza:

    <SNIP>a bunch of completely factual, but yet not quite useful information</SNIP>

    DrPizza,

    Yes, stored procedures provide encapsulation, a well-defined protocol for accessing the database, partitioning, denormalizing, abstraction, decoupling, and scalability.  However so does:

    C#
    Java
    VB
    ... and 7 thousand other OO languages.

    Complete bollocks.  How the fuck does a piece of C# help ensure the data integrity of an old mainframe application we have working on the same database?  Oh, that's right.  It doesn't!

    Do you have anything useful to add?

    And of all of them, Transact-SQL is the hardest to write, maintain, and debug (except maybe Malbolge).

    Rubbish.  T-SQL is a fuckload better than C# for writing data manipulation operations in.  SQL is a set-oriented language and it excels at set-oriented operations; it also has convenient support for transactions.  C# is neither. 

    I'm good at T-SQL, but I can still do anything in C# in half the time I can do it in T-SQL.  Decoupling example:  A working application needs a modification.  The security team decides that the current reversible encryption scheme in place for storing account passwords isn't good enough.  They now want a one-way hash with a salt.  Let's look at how our life would be in two situations:

    A.  You wrote all the DB access routines as SPs.  There are SPs for setting a password, and for validating a password.  Cool, now all you have to do is change the content of two SPs and run an update script on the current data (the current pw is reversible).

    Which is trivial.

    B.  You wrote all the DB access routines in C#.  There are methods on the User class for setting a password and for validating a password.  Cool, now all you have to do is change the content of two methods and run an update method on the current data (the current pw is reversible).

    So you need to do the same database work *anyway*.  And you have to add a load of C# work on top.  And a bit of C work too, for that application, and maybe some COBOL too, because you're still using that.  Great.

    "A" would be interesting to implement.  We could do one of several things;  1.  Roll our own hash algorithm (very bad idea), 2.  Write an extended stored procedure in C (That'll be fun, and look at the nifty distribution issues we've just created),

    All current databases worth using allow the use of high-level languages to implement functions within the database--which is fine.  Languages like Java or C# or C++ are good at string manipulation and this kind of complicated arithmetic, so we might as well use them for what they're good at.  In SQL Server you could write a C# function (which would just call some crypto library); in Oracle you could use Java (likewise); in DB2 I think you could choose between Java and .NET; in postgres you could use python or perl or tcl or C.  The distribution "issues" are a fuck of a lot less than having to change the application.  To "distribute" the change to the application you need to dump the change on every server and client that accesses the database, and there are inevitably more of these than there are database servers.  Making changes on the database side is easy; they're centralized and it's simple to ensure that all the upgrades happen simultaneously.  Making changes to database clients is not easy; they're not centralized and upgrading them all in parallel is difficult.
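    For what it's worth, the salted one-way hash itself is a few lines in any of these high-level languages.  A hedged sketch using Python's standard library (function names and the iteration count are illustrative choices, not a recommendation):

    ```python
    import hashlib, hmac, os

    ITERATIONS = 100_000  # illustrative work factor

    def hash_password(password, salt=None):
        """Return (salt, digest) for a salted, one-way hash of password."""
        salt = salt if salt is not None else os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def verify_password(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        # Constant-time comparison avoids leaking a match via timing.
        return hmac.compare_digest(candidate, digest)

    salt, digest = hash_password("s3cret")
    print(verify_password("s3cret", salt, digest))
    print(verify_password("wrong", salt, digest))
    ```

    Whether this lives in a database-hosted function or in the client is exactly the argument above; the code is the same either way.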

     

  • (cs) in reply to jsmith

    Auto-Parameterization works astonishingly well from Query Analyzer, and as I stated above for simple queries.  How many of your users use Query Analyzer, by the way? 

    My point is that queries (at least those submitted via OLEDB) are very often not parameterized.  In my own tests, parameterizing queries by using command objects and parameter objects will give you about the same speed as using an unparameterized call to a stored procedure (i.e. rs.open("procname parameters...")).  Using just rs.open(query) will hurt your performance, as you already seem to know.

    Using stored procedures, therefore, allows your developers to be as lazy as any "Good Architecture Men", and not parameterize.
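    Parameterizing from the client is not much extra typing, and it closes the injection hole at the same time.  A minimal sqlite3 sketch (the `users` table is invented for illustration; the `?` placeholder syntax varies by driver):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [("alice", "admin"), ("bob", "user")])

    # Lazy: string concatenation -- hostile input becomes part of the query.
    evil = "x' OR '1'='1"
    rows_concat = conn.execute(
        "SELECT name FROM users WHERE name = '" + evil + "'").fetchall()

    # Parameterized: the driver passes the value out-of-band, so the same
    # input is just an ordinary (non-matching) string, and the query text
    # stays constant for plan caching.
    rows_param = conn.execute(
        "SELECT name FROM users WHERE name = ?", (evil,)).fetchall()

    print(rows_concat)  # the injected OR matched every row
    print(rows_param)   # no row is literally named "x' OR '1'='1"
    ```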

  • Good Architecture Man (unregistered) in reply to Disgruntled DBA
    Disgruntled DBA:

    Programmer productivity should be the #1 optimization.

    This has to be the biggest WTF I have seen both here and abroad.  If this is true, then we should all go back to DOS, because GUIs are too hard for the programmers at Microsoft to make.  And just what happens to be the priority of the User?


    Quite honestly, you are an idiot. Which costs more? adding a new server to a farm to get capacity up or hiring two new developers because you need to add a new feature? Optimizing programmer productivity to server speed is a no brainer. Even the scary guy on the front page, Joel, says this. Frankly, if you don't get this, you are hopeless.

    When someone has the responsibility of running a business, you see things differently. Speed to market makes a huge difference, and making people more productive is what counts. You can always add more hardware, which is cheaper and quicker than adding new people.

    BTW. Your argument is a strawman, so, thank you.


  • (cs) in reply to Good Architecture Man
    Anonymous:
    Disgruntled DBA:

    Programmer productivity should be the #1 optimization.

    This has to be the biggest WTF I have seen both here and abroad.  If this is true, then we should all go back to DOS, because GUIs are too hard for the programmers at Microsoft to make.  And just what happens to be the priority of the User?


    Quite honestly, you are an idiot. Which costs more? adding a new server to a farm to get capacity up or hiring two new developers because you need to add a new feature? Optimizing programmer productivity to server speed is a no brainer. Even the scary guy on the front page, Joel, says this. Frankly, if you don't get this, you are hopeless.

    When someone has the responsibility of running a business, you see things differently. Speed to market makes a huge difference, and making people more productive is what counts. You can always add more hardware, which is cheaper and quicker than adding new people.

    BTW. Your argument is a strawman, so, thank you.




    And as I asked, what is the user's priority?  You spend a couple weeks (apparently) making the software that a user will use for 3 or 4 years.  And do your salespeople quote the "extra" hardware costs?  Or are these hidden from the customer while you go on about how "great" your software is?  If the makers of your compilers took this attitude, where would you be?  At a guess I would have to say you work for the Yugo Automobile Company.
  • (cs) in reply to jsmith
    jsmith:

    I *never* said I was weak in T-SQL.  I have been an MCP in SQL since 1996.  It's T-SQL that's weak.  How can you say that a language with only one looping structure and one statement-level conditional structure is easy to use?  The basic cursor template is:

    <font face="Courier New">DECLARE authors_cursor CURSOR FOR
    SELECT au_lname FROM authors
    WHERE au_lname LIKE 'B%'
    ORDER BY au_lname

    OPEN authors_cursor

    -- Perform the first fetch.
    FETCH NEXT FROM authors_cursor

    -- Check @@FETCH_STATUS to see if there are any more rows to fetch.
    WHILE @@FETCH_STATUS = 0
    BEGIN
       -- This is executed as long as the previous fetch succeeds.
       FETCH NEXT FROM authors_cursor
    END

    CLOSE authors_cursor
    DEALLOCATE authors_cursor</font>

    Notice the two fetches.  Because of a lack of a loop with a condition check at the end, fetch has to be done twice.  If a new variable is added, both fetches need to be updated.




    jsmith -- I am not trying to be condescending or sarcastic, but that statement clearly indicates that you have a *lot* to learn about SQL.   If your complaint about the language centers around the features and syntax of *cursors*, then you are not using SQL properly at all.
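    For comparison, the earlier "books by authors starting with B, plus 10% on the price" example collapses to one set-based statement with no cursor at all.  A sketch run against SQLite (the table and rows are invented; the single UPDATE works the same way in T-SQL):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE titles (title TEXT, au_lname TEXT, price REAL)")
    conn.executemany("INSERT INTO titles VALUES (?, ?, ?)",
                     [("Book One", "Bennet", 10.0),
                      ("Book Two", "Carson", 20.0)])

    # One set-based statement replaces the entire DECLARE/OPEN/FETCH loop.
    conn.execute("UPDATE titles SET price = price * 1.1 "
                 "WHERE au_lname LIKE 'B%'")

    rows = conn.execute(
        "SELECT title, price FROM titles ORDER BY title").fetchall()
    print(rows)
    ```

    Only Bennet's book gets the 10% bump; Carson's price is untouched.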


  • (cs) in reply to jsmith
    jsmith:

    My development process is:
    1.  Write and test block of SQL in Query Analyzer
    2.  Wrap it in a method with tool
    3.  Write .Net code that calls block of SQL

    It looks identical to how most people write SPs.  It's so close that my SQL wrapper tool can actually make it a stored procedure without altering any of my calling code.



    So your "tool" takes your verified SQL code and converts it to a method.  Does your tool also convert the method *back* to T-SQL code that you cut and paste into QA for further development?

    So, you need to amend your "development process" with the maintenance process:

    4. Use your "tool" to convert code *back* to T-SQL to make changes
    5. Test your T-SQL
    6. Use your "tool" to convert it *back* to a method again to deploy the change
    7. recompile your application or data access layer
    8. redistribute your application or data access layer

    The funny thing is, like lots of people who rebel against best practices, you are working extremely hard just to avoid using stored procedures for reasons unknown. 

    Doesn't

    1. Create + test stored proc
    2. call the stored proc where needed in your code
    3. ALTER stored proc to make changes

    seem a little bit simpler than your 8-step process?

    Instead of trying to prove that you can develop well and efficiently and all that without using stored procedures, why not just explain in a very simple paragraph the *benefit* you get by avoiding stored procedures. Don't try to prove to us that, with the proper tools and a bunch of extra steps and all that, it is "just as good" (that has been your whole position so far); explain why you feel it is *better* to not use stored procs.

  • Anthony (unregistered) in reply to useless user
    Anonymous:

    I don't understand what is so hard to grasp about using dynamic SQL statements. Sure they c, but after all the hoops and crazy joins it sounds like you have to jump through, it just sounds easier to do a direct statement.

    Or does concatenating a string seem too non-|33+ for you?


    Firstly, it's not strictly true anymore that "Dynamic SQL is not compiled". Some servers (MS SQL 2005 comes to mind) are smart enough to notice that you are issuing the same query that you did a minute ago, and to reuse the plan.

    Secondly, I agree with the rest: when building a complex query, for pete's sake do it in a suitable language such as Java, C# or the like, not in T-SQL. WTF?

    Thirdly: The captcha on this board sucks ass.
  • (cs) in reply to Good Architecture Man

    Anonymous:
    Disgruntled DBA:

    Programmer productivity should be the #1 optimization.

    This has to be the biggest WTF I have seen both here and abroad.  If this is true, then we should all go back to DOS, because GUIs are too hard for the programmers at Microsoft to make.  And just what happens to be the priority of the User?


    Quite honestly, you are an idiot. Which costs more? adding a new server to a farm to get capacity up or hiring two new developers because you need to add a new feature? Optimizing programmer productivity to server speed is a no brainer. Even the scary guy on the front page, Joel, says this. Frankly, if you don't get this, you are hopeless.

    When someone has the responsibility of running a business, you see things differently. Speed to market makes a huge difference, and making people more productive is what counts. You can always add more hardware, which is cheaper and quicker than adding new people.

    BTW. Your argument is a strawman, so, thank you.

    Uh, can I be one of the 2 new developers you're talking about? Throwing hardware at unoptimized code can cost tens of millions of dollars (after licenses, support, and whatnot), so these guys are making good money...

    As for time to market, the developers who are best at that are also the best at maximizing performance, precisely because they understand the concept of tradeoffs, unlike our quite honest friend.

     

  • (cs) in reply to Jeff S

    I think one of the benefits of stored procs is that they're stable.  A lot of people here seem to be very into .Net.  That's fine.  But what about when the next big thing comes out?  I used to work at a company that used Vignette StoryServer as our front end, Sybase as our back end.  Then we switched to Cold Fusion.  Then ASP.  I'm soooo glad that 96% of our database access was in stored procedures, otherwise we would have been re-writing them every year and a half - not to add improvements but just to port them over.  How are you going to export your .Net data access thingamajiggies when you're working with something other than .Net?  The only things I still use on my job today that I used in 1999 are Javascript, HTML, and stored procedures, everything else has changed.  If I'd ever implemented a database access layer in anything other than SPs I'd have had to throw it away which would have been a lot of wasted work.

    I'm assuming that everyone that dislikes stored procs must run their application server and database server on the same physical machine.  Otherwise for the life of me I can't see why you'd want to push 10,000 rows of data across the network from the DB server to the app server so that the app server can crunch the data and print 5 numbers on screen.  Pushing large amounts of data across the wire is a bad idea network wise, and pushing table names across the wire a bad idea security wise.

    I'm assuming that people who dislike SPs may also dislike triggers.  Do you?  We make extensive use of triggers and history tables at my job to provide a full and complete audit of all user modification of tables.  What do you all do?  Write a text file log?  Manually update a history table afterwards?  Include history table update statements in your block of sql every single time?
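    The trigger-plus-history-table audit pattern above is easy to sketch.  This SQLite version is illustrative only (table names are invented; T-SQL triggers would read from the `inserted`/`deleted` pseudo-tables instead of SQLite's `OLD`/`NEW`):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL);
    CREATE TABLE accounts_history (
        id INTEGER, old_balance REAL, new_balance REAL,
        changed_at TEXT DEFAULT CURRENT_TIMESTAMP);

    -- Every UPDATE is audited automatically, whichever app issued it.
    CREATE TRIGGER trg_accounts_audit AFTER UPDATE ON accounts
    BEGIN
        INSERT INTO accounts_history (id, old_balance, new_balance)
        VALUES (OLD.id, OLD.balance, NEW.balance);
    END;

    INSERT INTO accounts VALUES (1, 100.0);
    UPDATE accounts SET balance = 75.0 WHERE id = 1;
    """)

    hist = conn.execute(
        "SELECT id, old_balance, new_balance FROM accounts_history").fetchall()
    print(hist)
    ```

    The point being made in the thread: because the trigger lives in the database, even ad-hoc updates from another application get logged.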

    I would agree with jeff also about cursors.  It took me years to wean myself off of them.  They're hardly ever necessary in a database, they're just easy for programmers because they're procedural instead of set-based.  I think the earlier example got all books by author with a name starting with b and added 10% to the price - this should be a single sql statement, not a loop of any kind.

  • (cs) in reply to TomCo
    TomCo:

    Ouch.  My right buttock has locked up!  Someone please pass me the named pipe.

    Wouldn't that be a crack pipe?

  • (cs) in reply to [Si]dragon
    Anonymous:

    Sticking to these simple principles is a Good Idea.  For example, consider the plight of your typically hapless developer.  There is no reason that he or she should have to also be an expert in whatever database you're using, let alone anything.  Period.


    And that's the point where the problems start ...


    Anonymous:

    That means using services that do the mapping between business objects and persistence.  That work is best handled by someone who is a database expert.


    And why not let the database expert teach the "hapless developer" some methods on how to access the database "the right way"?

    Anonymous:

    I can come up with more reasons but there are plenty of materals on the subject which support this.  To get you started, Google on "Gang of Four", "Spring", "Hibernate", "Pragmatic Programmer", and so on.  Enjoy.



    db agnostic does make sense in some cases ... and not in others.

    l.

  • (cs) in reply to Good Architecture Man
    Anonymous:

    I have worked in this space for about 15 years. I have worked with the government, on projects for the Olympics, for companies such as Enron, SBC and IBM, and I have started two software companies, one of which I sold. Never have I seen a project improve because of SPs. Generally speaking, SPs were almost always directly related to projects slipping and increasing in budget.


    And boy, you must have been busy, working so much for so many companies on such huge projects, to stick your neck out with such a claim.

    Anonymous:

    Stored procedures were good back when the only alternative was writing straight SQL in your middle tier somewhere (and even then, most of us wrote our own ORMs), but today, in 2006, they serve very little purpose. I view them as a vestige of software's younger, less mature years. You can view them fondly if you wish, but to move on to serious development, you will have to let go or the world will pass you by.


    Oh please, not the "serious development" fairy tale. Let's just agree that you (or the people you worked with) had no idea what SPs were good for and therefore messed it up.

    l.
  • (cs) in reply to jsmith
    jsmith:

    "A" would be interesting to implement.  We could do one of several things;  1.  Roll our own hash algorithm (very bad idea), 2.  Write an extended stored procedure in C (That'll be fun, and look at the nifty distribution issues we've just created),  3.  Move the logic back to the application (hey, where did our loose coupling go?).  If anyone has a better solution, please speak up.



    Sure. Use mechanisms provided by the database (or import them). Isn't life unfair if you only know the MS world?

    l.
  • (cs) in reply to Good Architecture Man
    Anonymous:

    Quite honestly, you are an idiot. Which costs more? adding a new server to a farm to get capacity up or hiring two new developers because you need to add a new feature?


    And what happens, when that approach doesn't achieve the desired effect?

    Anonymous:

    Optimizing programmer productivity to server speed is a no brainer. Even the scary guy on the front page, Joel, says this. Frankly, if you don't get this, you are hopeless.


    There are no silver bullets. Depending on the complexity of an application, design flaws cannot be compensated for by "just adding another server". Y2K is quite a famous example of short-sighted implementation, and so is this concept.

    Anonymous:

    When someone has the responsibility of running a business, you see things differently.


    Not really - unless you want solutions that get thrown out at the earliest possible opportunity.

    Anonymous:

    Speed to market makes a huge difference, and making people more productive is what counts. You can always add more hardware, which is cheaper and quicker than adding new people.

    BTW. Your argument is a strawman, so, thank you.


    So is yours. Either use the right people for the right job or live with mediocre solutions and marketing blabla. Guess which approach I'm in favor of.

    l.
  • (cs) in reply to lofwyr
    lofwyr:
    jsmith:

    "A" would be interesting to implement.  We could do one of several things;  1.  Roll our own hash algorithm (very bad idea), 2.  Write an extended stored procedure in C (That'll be fun, and look at the nifty distribution issues we've just created),  3.  Move the logic back to the application (hey, where did our loose coupling go?).  If anyone has a better solution, please speak up.



    Sure. Use mechanisms provided by the database (or import them). Isn't life unfair if you only know the MS world?

    l.

    Please tell me how to do this in SQL 2000 or earlier.  They added CLR support to 2005 which finally solves the problem, but that was 15 years after people started recommending SPs to do everything.  What did they do for the previous 14 years?  What about those who don't have 2005 yet?  If only you knew the MS world well enough to know that Microsoft SQL Server 2000 doesn't have a cryptography library.
  • (cs) in reply to codeman

    WTF??!!

  • (cs) in reply to wbrianwhite

    wbrianwhite:
    I think one of the benefits of stored procs is that they're stable.  A lot of people here seem to be very into .Net.  That's fine.  But what about when the next big thing comes out?  I used to work at a company that used Vignette StoryServer as our front end, Sybase as our back end.  Then we switched to Cold Fusion.  Then ASP.  I'm soooo glad that 96% of our database access was in stored procedures, otherwise we would have been re-writing them every year and a half - not to add improvements but just to port them over.  How are you going to export your .Net data access thingamajiggies when you're working with something other than .Net?  The only things I still use on my job today that I used in 1999 are Javascript, HTML, and stored procedures, everything else has changed.  If I'd ever implemented a database access layer in anything other than SPs I'd have had to throw it away which would have been a lot of wasted work.

    I'm assuming that everyone that dislikes stored procs must run their application server and database server on the same physical machine.  Otherwise for the life of me I can't see why you'd want to push 10,000 rows of data across the network from the DB server to the app server so that the app server can crunch the data and print 5 numbers on screen.  Pushing large amounts of data across the wire is a bad idea network wise, and pushing table names across the wire a bad idea security wise.

    I'm assuming that people who dislike SPs may also dislike triggers.  Do you?  We make extensive use of triggers and history tables at my job to provide a full and complete audit of all user modification of tables.  What do you all do?  Write a text file log?  Manually update a history table afterwards?  Include history table update statements in your block of sql every single time?

    I would agree with jeff also about cursors.  It took me years to wean myself off of them.  They're hardly ever necessary in a database, they're just easy for programmers because they're procedural instead of set-based.  I think the earlier example got all books by author with a name starting with b and added 10% to the price - this should be a single sql statement, not a loop of any kind.

    Can there only be two physical tiers to an application?  (Please note that I'm talking about physical tiers here, not logical)

    A three tier approach eliminates this problem.  First tier is the app, second is app server, third is the database.  If you have to rewrite the first tier, you can still keep the second.

    Also, just because the code isn't in SPs doesn't mean it won't run in the DB server.  If an app issues "UPDATE products SET price=price*1.1", it doesn't pull any rows across the network.

    Finally, I don't dislike SPs.  I love triggers and I think that's one of the main applications of logic in T-SQL.  I dislike the philosophy of forcing logic into SPs for bogus reasons.  Modularity is a good goal, but the end result of modularization does not have to be SPs.

Leave a comment on “The Stored ÜberProcedure”
