• TheCPUWizard (unregistered)

    Yes, setting to null (when about to go out of scope) does tend to indicate a misunderstanding of the GC process. OTOH, having the Close then null in the main body, plus the null check with the insurance Close in the finally, DOES have some use in traceability [a breakpoint can be set on the Close in the finally, and the result set will still be open and accessible].
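    For reference, the pattern being described might look something like this (a sketch; the reader type and `command` variable are assumptions, not the article's actual code):

    ```csharp
    // Sketch of the "Close then null, with insurance Close in finally" pattern.
    System.Data.IDataReader reader = null;
    try
    {
        reader = command.ExecuteReader();
        // ... read rows ...
        reader.Close();   // normal-path close
        reader = null;    // marks the reader as already closed
    }
    finally
    {
        if (reader != null)
        {
            // A breakpoint here only fires when the try block exited early,
            // and at that point the result set is still open and inspectable.
            reader.Close();
        }
    }
    ```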

  • (nodebb)

    4000 is not arbitrary; arbitrary would be MSSQL's NVARCHAR(MAX) or whatever the Oracle equivalent is.

    But more importantly, why is column resizing done in code? Do they expand the columns based on user input? What if two threads are processing user input at the same time? What if the size parameter is greater than 4000? WTF, man

  • Yazeran (unregistered) in reply to Mr. TA

    Yea, second this.

    Allowing changes to the table definitions from user code (even if this is an administrator interface) is not a good idea...

    Better to do it from psql or whatever command-line access your DB engine offers, wrapped in a transaction so you can roll back if something goes haywire.

  • Sauron (unregistered)

    WTF haiku of the day:

    Broken database.

    For weeping developers,

    Disaster awaits!

  • John (unregistered) in reply to Yazeran

    My guess would be that it's part of some kind of setup/update program.

  • (nodebb) in reply to John

    I really hope what you said is true. If it's not, we have a much bigger WTF.

  • (nodebb) in reply to Mr. TA

    VARCHAR2(4000) is Oracle's VARCHAR(MAX), or you could use CLOB.

    There is a setting to make it bigger, but I doubt many people would change that setting anyway.

  • Yazeran (unregistered) in reply to John

    But why would it then use string concatenation to do the changes/updates? If it is part of an update script, wouldn't all SQL commands be fixed strings crafted for the specific update from version x.y.z to x1.y1.z1?
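    As a sketch, version-keyed fixed migration strings might look like this (all names and statements hypothetical, not from the article):

    ```csharp
    // Fixed, version-keyed migration statements: no user input is ever concatenated in.
    var migrations = new System.Collections.Generic.Dictionary<string, string[]>
    {
        ["1.2.0 -> 1.3.0"] = new[]
        {
            "ALTER TABLE orders MODIFY (customer_note VARCHAR2(4000))",
            "ALTER TABLE orders ADD (priority NUMBER(1) DEFAULT 0)"
        }
    };

    foreach (var sql in migrations["1.2.0 -> 1.3.0"])
    {
        using var cmd = connection.CreateCommand();
        cmd.CommandText = sql;   // a fixed string crafted for this specific upgrade
        cmd.ExecuteNonQuery();
    }
    ```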

  • (nodebb) in reply to Yazeran

    The only acceptable scenario would be a "user defined field" management system, typically as part of a business application. But I somehow very much doubt that that's what's going on here.

  • (nodebb) in reply to miquelfire

    It looks like Oracle's VARCHAR2(4000) is the equivalent of MSSQL's VARCHAR(4000), whereas MSSQL VARCHAR(MAX) doesn't have a direct counterpart in Oracle. Oracle's CLOB is similar to MSSQL's TEXT type; it looks like the two vendors went in different directions: Oracle chose to invest in making CLOB perform as well as possible, while MSSQL chose to create VARCHAR(MAX), which performs better, and to soft-obsolete TEXT.

    Addendum 2024-04-30 08:50: PS. The NULL treatment of VARCHAR2 in Oracle notwithstanding, of course.

  • Duston (unregistered) in reply to Yazeran

    "But why would it then use string concatenation to do the changes/updates?" Because if it's Enterprise-y enough, the update parameters will be in an XML file somewhere. That way you can reuse the update code, and code re-use is the bomb, right?

  • (nodebb) in reply to Mr. TA

    Hi,

    VARCHAR(MAX) and NVARCHAR(MAX) are the same as NTEXT/TEXT (Oracle: CLOB) and have nothing to do with VARCHAR/NVARCHAR. It's just the SQL standard way to describe an "unlimited" text column. I think in MSSQL the limit is 4 billion characters, not 4000.

    VARCHAR and NVARCHAR previously had a limit of 4000 characters; I think by now it's 8000.

    Addendum 2024-04-30 11:37: Oh BTW, VARCHAR(4000) is the same as VARCHAR(4000 CHAR) in Oracle. I know. If you don't actually use the CHAR modifier, Oracle counts bytes, and depending on the encoding a character can take as many as 6 bytes (UTF-8). To make matters worse, old versions of Oracle limited VARCHAR to 4000 bytes; so depending on what character encoding you were using, you had fewer characters available (including wasted bytes if you divided correctly).

    And finally, while TEXT and CLOB started out similarly as automatically encoded BLOBs, MS improved TEXT hugely and over time put it at feature parity with VARCHAR, while CLOB basically stayed the same. So yeah, Oracle is, with good reason, the worst relational DB these days.

  • (nodebb) in reply to John

    My guess would be that it's part of some kind of setup/update program.

    The article says it's ETL, so it's more likely that they want the destination to match the source and they have no idea (or no care) how/when/why the source schema gets changed. They were charged with copying the data, so they'll copy to a place that will hold the data.

  • (nodebb) in reply to MaxiTB

    Hate to have to correct this point but no, TEXT and VARCHAR(MAX) are NOT the same. In fact, VARCHAR(MAX) has a lot to do with VARCHAR(N). Both VARCHAR and NVARCHAR have a limit of 8000 bytes, which is 8000 chars for VARCHAR and 4000 chars for NVARCHAR.

    https://stackoverflow.com/questions/834788/using-varcharmax-vs-text-on-sql-server

    Regarding MSSQL TEXT vs Oracle CLOB, again, you're incorrect about their comparison. In Oracle, CLOB behaves more like MSSQL's VARCHAR(MAX) - it automatically tries to store it in the row page if it's 8000 bytes or less, and moves to LOB if more than 8000 bytes. Whereas MSSQL TEXT continues the legacy behavior of always storing in LOB regardless of size. So, it's exactly the opposite of what you wrote.

    Still, Oracle is a terrible company and I wouldn't use any of their stuff even if they had some advantages over MSSQL in some respects. (It's possible I'd go with a free DB like PostgreSQL or MySQL if there was a client/hosting/etc. requirement or monetary considerations instead of MSSQL. But 0.000001% chance I would use Oracle.)

  • (nodebb)

    EFcore - no no no. Never use EF or EFCore, or NHibernate. Just don't.

    Regarding the reader closing, no, it's not done correctly. Not only is setting it to null in the finally block useless, but disposing it in the try block while there is already a disposal in the finally is also unneeded.

  • (nodebb)

    I think what's going on here is that this code is intended to deal with databases in an old format. Yeah, it's a security issue to run such stuff under user control, but in the absence of a formal process for database management it's not insane. Most of this code appears to be set up to make do with flawed inputs.

    As for the setting to null--I think I understand that one. It ensures the reader never appears valid when it is not, and while I don't think this is the right answer, neither do I consider it bad. Having an object that doesn't work isn't a good thing, but the right answer is a using statement. The reader then only exists in the scope where it's valid, and the cleanup is handled automatically.
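    A minimal sketch of the using-statement version (the `command` variable is assumed from context):

    ```csharp
    // The using declaration scopes the reader to its valid lifetime;
    // Dispose (which also closes the reader) runs automatically on scope exit.
    using var reader = command.ExecuteReader();
    while (reader.Read())
    {
        // ... process the row ...
    }
    // reader is disposed here; no Close/null bookkeeping needed,
    // and the variable cannot be referenced outside this scope.
    ```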

  • (nodebb) in reply to Mr. TA

    NHibernate was never good - everyone still using it is doing so because migrating would be a ton of effort, and by that point you could rewrite everything anyway. EF6 was better but still a coin-toss situation (migrations via XML were a pain, and dealing with defaults and n:n navigation scenarios, urgh), however EFcore has nothing to do with previous versions, it was built from the ground up - so all those shortcomings I hated before are gone.

    And as I said before, for high-performance situations it might be better to go deeper, but for most business cases it doesn't matter (e.g. in my last real-life web service, EFcore accounted for less than 0.1% of the total cost of most web API calls; going deeper would be overkill).

    When it comes to setting the instance to null after a Dispose, that's defensive programming. For example, in modern .NET the compiler would warn you if you tried to use the reference afterwards, so if someone adds code after that point, it wouldn't result in a runtime crash. Personally I don't do it, but I have worked at clients where this was mandated, even long before we had non-nullable reference types.
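    A sketch of that compiler behavior, assuming nullable reference types are enabled for the project (the reader type and `command` variable are illustrative):

    ```csharp
    // With <Nullable>enable</Nullable>, null-flow analysis flags use-after-null.
    System.Data.IDataReader? reader = command.ExecuteReader();
    // ... use reader ...
    reader.Dispose();
    reader = null;     // defensive: marks the instance as no longer usable
    reader.Read();     // compiler warning CS8602: dereference of a possibly null reference
    ```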

    The dispose in the try block is useless; I didn't notice that one before.

  • (nodebb)

    this is, by the by, an Oracle database

    Ah, I can see your WTF right there. Not sure what all the other stuff was.

  • TheCPUWizard (unregistered)

    "EFcore supports batch operations" -- except it is broken in a few (near-edge, but not uncommon) cases and may fall back to individual commands. [Had a system roll over last month; a batch operation that typically ran in under 1 minute did not complete overnight.]

  • (nodebb)

    Forgot to mention: you don't let the garbage collector clean up anything that refers to an external resource.

  • Duke of New York (unregistered)

    Am I correctly reading that if a column were null, they wouldn't convert it, but they'd fall through and use the variables with either the initial empty string or a value leaked from a previous iteration?

  • Wim de Lange (unregistered) in reply to Yazeran

    A rollback on an ALTER TABLE does not work in Oracle: DDL statements implicitly commit the transaction.

  • Craig (unregistered)

    System.Diagnostics.Debug.* is still appropriate for getting diagnostic info dumped when running under the debugger. I'd never consider it a substitute for logging, though; it wouldn't even have occurred to me that someone might use it that way.

Leave a comment on “Finding the Right Size”
