• (nodebb)

    Transactions are hard.

    Like, that's just your opinion, man.

  • (nodebb)

    It's always good to think through how any given database operation behaves inside of a transaction.

    More to the point, it's always good to think about how any given sequence of database operations behaves outside of a transaction. (Is it, for example, possible to create gibberish in the database by crashing halfway through the sequence?)

  • Darren (unregistered)

    Based on the myriad of WTFs we get here, it's quite obvious that developers don't even begin to formulate a thought as to how a database operation works. That's the job of the framework or DBA, right?

  • (nodebb)

    Meh, transactions just sound like some pointless paranoia stuff. Surely that adds complexity at runtime and more importantly more code to write and think about. Ain't got no time for that. I mean it's a database, it's built to execute operations, so why would it fail? And if it does, we'll see them errors in the logs or some kind of exception somewhere. I mean, somebody else will see them because I have better things to do than check logs or catch exceptions or handle return error codes. That's for noobs, I write pro code that doesn't fail.

  • (nodebb) in reply to Ralf

    we'll see them errors in the logs or some kind of exception somewhere

    Except in the first commercial system I worked on where some bright spark had the idea to use a database table for logging. It worked really well except when some error occurred that caused the transaction to abort and roll back the database, including, of course, the log table.

  • Darren (unregistered) in reply to jeremypnet

    In a similar vein, I was once asked to set up a system that would email the business if the email server went down.

    I suppose you could do that nowadays with Exchange SE - but only if the SE stands for Schrödinger Edition...

  • Faroguy (unregistered)

    Let's just say that choices were made 15 years ago and ever since I started the job in 2023 I've been handling the consequences.

  • Richard Brantley (unregistered)

    I worked with a product made by a vendor who said 'if the process errors out, just roll back the database.'

    That was, sadly, not the worst of their coding ills.

  • gman003 (unregistered) in reply to jeremypnet

    At one of my first coding jobs, I implemented error logging in the database. Which worked up until an error occurred inside the log-to-the-database code. Which caused it to attempt to log an error to the database. Which caused another error. Which - well, you can see where this is going. (This is why you don't delegate your logging system to the intern, kids!)

    We were several months into crunch at that point and nobody was looking at the error logs, so we just disabled it. And shipped that way. And at the time I left, a decade and a few promotions later, we still didn't have any error logging.

    I should ask if they ever did re-implement that, next time they call asking me to contract for a bit.

  • (nodebb)

    Transactions are used to make non-atomic sequences of database operations atomic. While the transaction is in progress, the affected data is locked; every operation in the transaction happens, then it commits. Or, if something happens that makes it not work, it's rolled back.

    Why do you want transactions? Let's say you're transferring money between accounts: one account gets debited an amount, and another gets credited. Standard database stuff. But so many potential errors could happen: an account may not exist, or maybe the source account doesn't have enough money, or someone could try the classic "this is why we use mutual exclusion" trick and sneak an operation in between to double-debit the account.

    You could do this in the application code, testing for all of these conditions and hoping you don't create an accidental TOCTOU bug while handling the various error cases. Or you could just open a transaction, and if an error happens, the transaction fails, everything is put back to where it started, and nothing happened. All the application code has to do is execute and check the result, rather than handle a bunch of edge cases that get complicated quickly precisely because you're trying to avoid a TOCTOU at the same time.

    (TOCTOU, time-of-check to time-of-use: a common security flaw where you check an attribute of an object and then do something with the object, but in the small window between the two, the object could get replaced. A classic target is a file: you check whether you can open a file, then you open it, but someone swaps the file out from under you by repointing a link. Another famous TOCTOU is on the Nintendo Game Boy, where the Nintendo logo is embedded in the game cartridge and the boot ROM checks whether those logo bytes are what they should be before passing them to the display code. With a bit of hardware craft, you can pass the first check and then serve up a different image instead.)
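
    The transfer scenario above can be sketched in a few lines. This is a minimal illustration using Python's sqlite3 with a made-up accounts table, not anyone's production banking code:

```python
import sqlite3

# Hypothetical schema, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

def transfer(conn, src, dst, amount):
    """Move `amount` from src to dst atomically: both updates commit, or neither."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on exception
            cur = conn.execute(
                "UPDATE accounts SET balance = balance - ? "
                "WHERE name = ? AND balance >= ?",
                (amount, src, amount))
            if cur.rowcount != 1:
                raise ValueError("source missing or insufficient funds")
            cur = conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE name = ?",
                (amount, dst))
            if cur.rowcount != 1:
                raise ValueError("destination account missing")
        return True
    except ValueError:
        return False  # everything rolled back; no half-finished transfer

transfer(conn, "alice", "bob", 30)     # succeeds and commits
transfer(conn, "alice", "nobody", 10)  # fails; the debit of alice is rolled back
print(dict(conn.execute("SELECT name, balance FROM accounts")))
# {'alice': 70, 'bob': 80}
```

    The `with conn:` block commits on success and rolls back on any exception, so the application code only checks one result instead of cleaning up partial updates.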

  • (nodebb) in reply to Ralf

    Transactions only matter when it's important that the data in your database is fully consistent, and I'm not sure when that's not important.

    I suppose if you've never worked on a system with real concurrency, then you may have been lucky in that you've never been bitten by inconsistent-data issues.

    Even if you don't explicitly start a transaction, every individual database query/update/insert/delete runs in an implicit transaction; it's just (generally) a small one which isn't linked to the other database operations being performed as part of the same "business transaction". If all your DB operations are in the same transaction, then if any of them fails (e.g. due to a foreign key constraint violation), all the others roll back automatically, as opposed to the alternative where some of your data was saved but is now inconsistent because of the part that didn't get committed.
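
    To make that concrete, here is a small sketch using Python's sqlite3 (the order/order-line schema is made up for the example): statements committed one by one can leave a half-written pair behind, while grouping them lets a foreign-key failure undo the whole batch.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite only enforces FKs with this pragma
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY);
    CREATE TABLE order_lines (
        order_id INTEGER REFERENCES orders(id),
        item TEXT);
""")

# Two related writes, each committed on its own: the first sticks,
# the second fails, and the database is left inconsistent.
conn.execute("INSERT INTO orders VALUES (1)")
conn.commit()
try:
    conn.execute("INSERT INTO order_lines VALUES (99, 'widget')")  # wrong id: FK violation
    conn.commit()
except sqlite3.IntegrityError:
    pass  # 'orders' keeps its row; 'order_lines' got nothing

# The same pair inside one transaction: the failure rolls both back.
try:
    with conn:
        conn.execute("INSERT INTO orders VALUES (2)")
        conn.execute("INSERT INTO order_lines VALUES (99, 'gadget')")  # FK violation
except sqlite3.IntegrityError:
    pass  # order 2 was rolled back along with the failed line

print(conn.execute("SELECT count(*) FROM orders").fetchone()[0],
      conn.execute("SELECT count(*) FROM order_lines").fetchone()[0])  # 1 0
```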

    I know as well that EF Core's SaveChanges runs everything within a single transaction, so even developers unaware of transactions, if they're using an ORM, may very well be leveraging transactions and their benefits without realizing it.

    Addendum 2026-02-13 04:12: (For anyone not interpreting Ralf's comment as sarcastic)

  • ruebenchandler (unregistered) in reply to jeremypnet

    A database table is an excellent way to implement logging, and for analytics use-cases it's way ahead of text-based logs.

    You have to implement it in its own session so that it handles the table writes in its own transactions (or in an autonomous transaction, where the RDBMS supports them), which typically means you write and commit each log entry immediately as well (and you can watch the operations running in real time if you have the access level!).

    But yes it's a bit bone-headed to write to the log table in the same transaction as the business logic currently executing.
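
    A minimal sketch of that own-session idea, using two independent Python sqlite3 connections (a second in-memory database stands in for the separate logging session; the tables are made up):

```python
import sqlite3

# The business connection holds the transaction that may roll back.
business = sqlite3.connect(":memory:")
business.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")

# The log writer is an independent session (a file or server in practice),
# so its commits survive a rollback of the business transaction.
log = sqlite3.connect(":memory:")
log.execute("CREATE TABLE app_log (message TEXT)")

def log_msg(message):
    # Commit immediately, outside the business transaction.
    log.execute("INSERT INTO app_log VALUES (?)", (message,))
    log.commit()

try:
    with business:  # business transaction: rolls back on exception
        business.execute("INSERT INTO accounts VALUES ('alice', 100)")
        log_msg("inserted alice")
        raise RuntimeError("simulated failure")  # aborts the transaction
except RuntimeError:
    pass

# The business write was rolled back, but the log entry was kept.
print(business.execute("SELECT count(*) FROM accounts").fetchone()[0])  # 0
print([row[0] for row in log.execute("SELECT message FROM app_log")])   # ['inserted alice']
```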

  • Ralph (unregistered) in reply to ruebenchandler

    I've had to analyse transactions that failed inside stored procedures, and the only way to get at the reason was to write messages to a log table. At first that was a dead end, as the messages were indeed gone along with the rest of the transactional changes. That was until I discovered Oracle's PRAGMA AUTONOMOUS_TRANSACTION, which lets a stored procedure act outside the currently running transaction.

    I built my logging procedure with it and was finally able to enjoy a full log of all the disastrous things happening inside the failed transactions. I have read countless arguments about why autonomous transactions are a bad thing, and how if you need them you made mistakes elsewhere anyway, yada yada, but for logging inside transactions they are unbeatable, and I sadly miss them in, e.g., PostgreSQL. If anyone knows a not-too-complicated alternative, I'd be all ears.

Leave a comment on “Consistently Transactional”
