• Darren (unregistered)

    It's a shame that so many developers fail to take into account how their code will work at scale. Sure, it works fine when it's working with a hundred records - but what happens to memory, network bandwidth and CPU usage when it's dealing with a hundred thousand records? It's all too easy to pass that off as Someone Else's Problem, but taking that little bit of time to think of the future will actually save the developer - and all the sysadmins - time and effort.

    An hour's work today will avoid a week's work in the future.

    Tangentially, I had a self-described 'Data Analyst' who complained that one of his processes kept failing and that the IT systems were therefore terrible. It turned out he was essentially doing a 'Select * from *' on a database and putting the results into Excel so he could run a filter on them.

  • TheCPUWizard (unregistered)

    Disagree... Consider an application which uses ZipCodes [5 digit]. You need information about your customers [their address has a zip code] and some demographics [stored in a zip code table]... MUCH better (subject to memory) to read the 41,642 rows of the zip code table once, rather than doing a join on 10K customers and having the same static information returned on each row.
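
    A minimal sketch of the difference, with invented table and column names (Customers, ZipDemographics - neither is from the comment):

        -- Join approach: the same static demographic columns come back
        -- once per customer row, i.e. 10K copies of repeated data.
        SELECT c.CustomerID, c.ZipCode, z.MedianIncome, z.Population
        FROM Customers c
        JOIN ZipDemographics z ON z.ZipCode = c.ZipCode;

        -- Cache approach: read the static table once (one pass over the
        -- 41,642 rows), hold it in an application-side map keyed by zip
        -- code, then fetch only the customer rows and look the
        -- demographics up locally.
        SELECT ZipCode, MedianIncome, Population FROM ZipDemographics;
        SELECT CustomerID, ZipCode FROM Customers;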

  • (nodebb)

    As a general rule, if you're using an RDBMS and can solve your problem using SQL, you should solve your problem using SQL.

    And this rule explains all those posts we've had about people trying to do date calculations or basic mathematics using SQL queries rather than writing application-level code for that.

    I'd also suggest that the problem was not that the dev used a query, but how the dev used it.

  • Greg (unregistered) in reply to Darren

    And on the opposite side of the spectrum there's the old adage "premature optimization is the root of all evil"... I'm guessing this started out with only a few tables, where the posted approach still worked decently. Nevertheless, someone should have changed this at some point.

  • Hanzito (unregistered)

    Seeing that everything is already in separate tables, wouldn't the wiser decision have been to not use an Entity table at all, or, if you really, really wanted one (because, say, some Architect had Decided that there would be an Entity Event Bus), to use a view that combines the tables? Anyway, a horrible thing.
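
    A sketch of what that view might look like, with invented table and column names standing in for the article's per-type tables:

        -- Hypothetical view combining the per-type tables; the remaining
        -- entity types (Organization, Resource, Batch) would follow the
        -- same pattern.
        CREATE VIEW EntityNames AS
        SELECT a.EntityID, 'Application' AS EntityType, a.ApplicationName AS Name FROM Applications a
        UNION ALL
        SELECT p.EntityID, 'Person', p.PersonName FROM Persons p
        UNION ALL
        SELECT g.EntityID, 'Group', g.GroupName FROM Groups g;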

  • Seirios (unregistered)

    Even on the database side: if you can do it in SQL instead of a procedure, do it in SQL. The description / name should have been a field in the entity table, and the whole process would have been a simple select on one table.
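
    Under that assumption (a Name column living on the entity table, which is not how the article's schema actually looks), the whole lookup collapses to:

        -- Single-table lookup: no joins, no per-type tables touched.
        SELECT Name FROM Entity WHERE EntityID = @EntityID;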

  • Pag (unregistered)

    Would it help to include EntityType in the conditions for the joins: ON E.EntityType='Application' AND E.EntityID=A.EntityID etc.? This would allow the database to optimise the query execution.
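
    A sketch of that rewrite for one entity (table and column names invented; only the join predicates are the point here):

        SELECT e.EntityID,
               COALESCE(a.ApplicationName, p.PersonName) AS Name
        FROM Entity e
        -- The type test inside the ON clause lets the optimiser skip a
        -- join entirely for rows of the wrong type.
        LEFT JOIN Applications a
               ON e.EntityType = 'Application' AND e.EntityID = a.EntityID
        LEFT JOIN Persons p
               ON e.EntityType = 'Person' AND e.EntityID = p.EntityID
        WHERE e.EntityID = @EntityID;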

  • Tinkle (unregistered)

    I am more concerned about the fact that you can have an Entity and not know if you are working with an Automation, a Group, an Organization, a Person, a Resource or a Batch.

    Are they just bundling fields with the same name into a common table simply because they are the same type of data?

    The entire Entity table seems wrong to me. At a minimum it is terribly named, as "Entity" tells me nothing about the table.

  • RB (unregistered) in reply to Darren

    "An hour's work today will avoid a week's work in the future"

    But managers do not think like that. An hour's work now is an extra $XX I have to pay now, so no.

  • kythyria (unregistered)

    The strange thing is that all these types of entities have a name, and yet the name is not in the common table. So it's a two-layer WTF: the schema is WTF-y, and then there's WTF-y code to process it.

  • Steve (unregistered)

    This is a case where something that is good in one context is bad in another. Assuming this is SQL Server, you might see that you can write a function and think to yourself that encapsulating some logic in one would be a great thing. Unfortunately, a scalar function (as this is written) makes a query ineligible for parallelism. So if you use this function in your query and it has to process a bunch of records, it will process them one by one on a single thread. The fix as described might speed things up some, but I'd bet the query still doesn't go parallel with large data. This particular function could be rewritten as an inline table-valued function, which would allow the server to use parallelism, and then the performance hit would be much smaller. But this is one of those cases where SQL Server shows you functionality that looks nice and shiny, but ends up punching you in the face, and it's not really clear why.
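
    A sketch of what that inline rewrite might look like in T-SQL (table and column names invented, since we only see the function's shape):

        -- Inline table-valued function: the body is a single SELECT that
        -- the optimiser expands into the calling query's plan, so the
        -- query stays eligible for parallelism.
        CREATE FUNCTION dbo.GetEntityName (@EntityID INT)
        RETURNS TABLE
        AS RETURN
        (
            SELECT COALESCE(a.ApplicationName, p.PersonName, g.GroupName) AS Name
            FROM Entity e
            LEFT JOIN Applications a ON e.EntityType = 'Application' AND e.EntityID = a.EntityID
            LEFT JOIN Persons p ON e.EntityType = 'Person' AND e.EntityID = p.EntityID
            LEFT JOIN Groups g ON e.EntityType = 'Group' AND e.EntityID = g.EntityID
            WHERE e.EntityID = @EntityID
        );

        -- Called with APPLY rather than as a scalar expression:
        -- SELECT t.*, n.Name
        -- FROM SomeTable t
        -- OUTER APPLY dbo.GetEntityName(t.EntityID) n;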

  • Dave (unregistered)

    Since they're dealing with a SQL system that has a case statement, it likely has sub-select capabilities too. Instead of reading the different name columns across the join, they could have sub-selected from just the table they know they're operating against in each case branch, so the main clause only needs the Entities table. A bit more of a PL/SQL approach than the pure SQL statement they used, but still more SQL-focused than a pure PL/SQL approach, where you'd just end up with if statements deciding which secondary query to run.
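
    Something like this, with invented names (each branch touches only the one table that actually matters for that type):

        SELECT CASE e.EntityType
                 WHEN 'Application' THEN
                   (SELECT a.ApplicationName FROM Applications a WHERE a.EntityID = e.EntityID)
                 WHEN 'Person' THEN
                   (SELECT p.PersonName FROM Persons p WHERE p.EntityID = e.EntityID)
                 WHEN 'Group' THEN
                   (SELECT g.GroupName FROM Groups g WHERE g.EntityID = e.EntityID)
               END AS Name
        FROM Entity e
        WHERE e.EntityID = @EntityID;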

    But it's a horrible schema design. If Entities doesn't contain a common name field, WTF does it contain that led to it being normalized into its own table?

  • RLB (unregistered)

    Or, y'know, you could use a union.
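
    For instance (names invented): one set-based pass over the combined name rows, instead of a scalar function call per row:

        SELECT e.EntityID, n.Name
        FROM Entity e
        JOIN (
            SELECT EntityID, ApplicationName AS Name FROM Applications
            UNION ALL
            SELECT EntityID, PersonName FROM Persons
            UNION ALL
            SELECT EntityID, GroupName FROM Groups
        ) n ON n.EntityID = e.EntityID;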

  • (nodebb) in reply to RB

    And there's more profit to be made if you charge for a week's work versus an hour's.

  • SomeDBA (unregistered) in reply to RLB

    Very much this.
    Functions run once per row - effectively a loop - so calling one across more than a handful of records compounds the issue.
